HkwoSDPgg
[{"section_index": "0", "section_name": "SEMI-SUPERVISED KNOWLEDGE TRANSFER\nFOR DEEP LEARNING FROM PRIVATE TRAINING DAT\u00a24", "section_text": "Nicolas Papernot*\nMartin Abadi\nPennsylvania State University\ngoodfellow@google.com\nSome machine learning applications involve training data that is sensitive, such\nas the medical histories of patients in a clinical trial. A model may inadvertently\nand implicitly store some of its training data; careful analysis of the model may\ntherefore reveal sensitive information.\nTo address this problem, we demonstrate a generally applicable approach to pro-\nviding strong privacy guarantees for training data: Private Aggregation of Teacher\nEnsembles (PATE). The approach combines, in a black-box fashion, multiple\nmodels trained with disjoint datasets, such as records from different subsets of\nusers. Because they rely directly on sensitive data, these models are not pub-\nlished, but instead used as \u201cteachers\u201d for a \u201cstudent\u201d model. The student learns\nto predict an output chosen by noisy voting among all of the teachers, and cannot\ndirectly access an individual teacher or the underlying data or parameters. The\nstudent\u2019s privacy properties can be understood both intuitively (since no single\nteacher and thus no single dataset dictates the student\u2019s training) and formally, in\nterms of differential privacy. These properties hold even if an adversary can not\nonly query the student but also inspect its internal workings."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Some machine learning applications with great benefits are enabled only through the analysis of\nsensitive data, such as users\u2019 personal contacts, private photographs or correspondence, or even\nmedical records or genetic sequences\n[Sweeney|[1997). Ideally, in those cases, the learning algorithms would protect the privacy of users\u2019\ntraining data, e.g., by guaranteeing that the output model generalizes away from the specifics of any\nindividual user. Unfortunately, established machine learning algorithms make no such guarantee;\nindeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on\nspecific training examples in the sense that some of these examples are implicitly memorized.\nRecent attacks exploiting this implicit memorization in machine learning have demonstrated that\nprivate, sensitive training data can be recovered from models. Such attacks can proceed directly, by\nanalyzing internal model parameters, but also indirectly, by repeatedly querying opaque models to\ngather data for the attack\u2019s analysis. For example, [Fredrikson et al.|{2015) used hill-climbing on the\noutput probabilities of a computer-vision classifier to reveal individual faces from the training data.\n*Work done while the author was at Google.\ntWork done both at Google Brain and at OpenA:\nUlfar Erlingsson\nabadi@google.com\nulfar@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Compared with previous work, the approach imposes only weak assumptions on\nhow teachers are trained: it applies to any model, including non-convex models\nlike DNNs. 
We achieve state-of-the-art privacy/utility trade-offs on MNIST and\nSVHN thanks to an improved privacy analysis and semi-supervised learning.\nBecause of those demonstrations\u2014and because privacy guarantees must apply to worst-case out-\nliers, not only the average\u2014any strategy for protecting the privacy of training data should prudently\nassume that attackers have unfettered access to internal model parameters.\nTo protect the privacy of training data, this paper improves upon a specific, structured application oi\nthe techniques of knowledge aggregation and transfer (Breiman|[1994}, previously explored by[Nis-\n(2007), (2010), and particularly (2016). In this strategy, first.\nan SE emt of teacher models is trained on disjoint subsets of the sensitive data.\nThen, using auxiliary, unlabeled non-sensitive data, a student model is trained on the aggregate out-\nput of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this\nstrategy ensures that the student does not depend on the details of any single sensitive training data\npoint (e.g., of any single user), and, thereby, the privacy of the training data is protected even ii\nattackers can observe the student\u2019s internal model parameters.\nTo establish strong privacy guarantees, it is important to limit the student\u2019s access to its teachers\n30 that the student\u2019s exposure to teachers\u2019 knowledge can be meaningfully quantified and bounded\nFortunately, there are many techniques for speeding up knowledge transfer that can reduce the rate\nof student/teacher consultation during learning. We describe several techniques in this paper, the\nmost effective of which makes use of generative adversarial networks (GANs) (Goodfellow et al.\n2014) applied to semi-supervised learning, using the implementation proposed by|Salimans et al\n(2016). For clarity, we use the term PATE-G when our approach is combined with generative, semi-\nsupervised methods. Like all semi-supervised learning methods, PATE-G assumes the student ha:\naccess to additional, unlabeled data, which, in this context, must be public or non-sensitive. This\nassumption should not greatly restrict our method\u2019s applicability: even when learning on sensitive\ndata, a non-overlapping, unlabeled set of data often exists, from which semi-supervised methods car\nextract distribution priors. For instance, public datasets exist for text and images, and for medica\ndata.\nIt seems intuitive, or even obvious, that a student machine learning model will provide good privacy\nwhen trained without access to sensitive training data, apart from a few, noisy votes from a teacher\nquorum. However, intuition is not sufficient because privacy properties can be surprisingly hard\nto reason about; for example, even a single data item can greatly impact machine learning models\ntrained ona large corpus (Chaudhuri et al.|[2011). Therefore, to limit the effect of any single sensitive\ndata item on the student\u2019s learning, precisely and formally, we apply the well-established, rigorous\nstandard of differential privacy (Dwork & Roth|[2074). Like all differentially private algorithms, our\nlearning strategy carefully adds noise, so that the privacy impact of each data item can be analyzed\nand bounded. In particular, we dynamically analyze the sensitivity of the teachers\u2019 noisy votes;\nfor this purpose, we use the state-of-the-art moments accountant technique from|Abadi et al. 
(2016),\nwhich tightens the privacy bound when the topmost vote has a large quorum. As a result, for MNIST\nand similar benchmark learning tasks, our methods allow students to provide excellent utility, while\nour analysis provides meaningful worst-case guarantees. In particular, we can bound the metric for\nprivacy loss (the differential-privacy \u00a2) to a range similar to that of existing, real-world privacy-\nprotection mechanisms, such as Google\u2019s RAPPOR {Exlingsson et al.|[2014}.\nThis paper shows how this strategy\u2019s privacy guarantees can be strengthened by restricting student\nraining to a limited number of teacher votes, and by revealing only the topmost vote after care-\n\u2018ully adding random noise. We call this strengthened strategy PATE, for Private Aggregation oj\nfeacher Ensembles, Furthermore, we introduce an improved privacy analysis that makes the strat-\nogy generally applicable to machine learning algorithms with high utility and meaningful privacy\nzuarantees\u2014in particular, when combined with semi-supervised learning.\nFinally, it is an important advantage that our learning strategy and our privacy analysis do not depend\non the details of the machine learning techniques used to train either the teachers or their student.\nTherefore, the techniques in this paper apply equally well for deep learning methods, or any such\nlearning methods with large numbers of parameters, as they do for shallow, simple techniques.\nIn comparison, guarantee privacy only conditionally, for a restricted class of\nstudent classifiers\u2014in effect, limiting applicability to logistic regression with convex loss. Also,\nunlike the methods of {2016}, which represent the state-of-the-art in differentially-\nprivate deep learning, our techniques make no assumptions about details such as batch selection, the\nloss function, or the choice of the optimization algorithm, Even so, as we show in experiments on\nFigure 1: Overview of the approach: (1) an ensemble of teachers is trained on disjoint subsets of the\nsensitive data, (2) a student model is trained on public data labeled using the ensemble.\nMNIST and SVHN, our techniques provide a privacy/utility tradeoff that equals or improves upor\nbespoke learning methods such as those of[Abadi et al.}{2016).\nSection[5] further discusses the related work. Building on this related work, our contributions are as\nfollows:\nOur results are encouraging, and highlight the benefits of combining a learning strategy based on\nsemi-supervised knowledge transfer with a precise, data-dependent privacy analysis. However, the\nmost appealing aspect of this work is probably that its guarantees can be compelling to both an expert\nand a non-expert audience. In combination, our techniques simultaneously provide both an intuitive\nand a rigorous guarantee of training data privacy, without sacrificing the utility of the targeted model.\nThis gives hope that users will increasingly be able to confidently and safely benefit from machine\nlearning models built from their sensitive data.\nIn this section, we introduce the specifics of the PATE approach, which is illustrated in Figure]\nWe describe how the data is partitioned to train an ensemble of teachers, and how the predictions\nmade by this ensemble are noisily aggregated. 
In addition, we discuss how GANs can be used in training the student, and distinguish PATE-G variants that improve our approach using generative, semi-supervised methods.

[Figure 1 diagram: the sensitive data, the teachers, and the aggregate teacher are not accessible by the adversary; the student, the queries made to it, and the incomplete public data are. Arrows denote training, prediction, and data feeding.]

• We demonstrate a general machine learning strategy, the PATE approach, that provides differential privacy for training data in a "black-box" manner, i.e., independent of the learning algorithm, as demonstrated by Section 4 and Appendix C.

• We improve upon the strategy outlined in Hamm et al. (2016) for learning machine models that protect training data privacy. In particular, our student only accesses the teachers' top vote and the model does not need to be trained with a restricted class of convex losses.

• We explore four different approaches for reducing the student's dependence on its teachers, and show how the application of GANs to semi-supervised learning by Salimans et al. (2016) can greatly reduce the privacy loss by radically reducing the need for supervision.

• We present a new application of the moments accountant technique from Abadi et al. (2016) for improving the differential-privacy analysis of knowledge transfer, which allows the training of students with meaningful privacy bounds.

• We evaluate our framework on MNIST and SVHN, allowing for a comparison of our results with previous differentially private machine learning methods. Our classifiers achieve an (ε, δ) differential-privacy bound of (2.04, 10⁻⁵) for MNIST and (8.19, 10⁻⁶) for SVHN, respectively with accuracies of 98.00% and 90.66%. In comparison, for MNIST, Abadi et al. (2016) obtain a looser (8, 10⁻⁵) privacy bound and 97% accuracy. For SVHN, Shokri & Shmatikov (2015) report approx. 92% accuracy with ε > 2 per each of 300,000 model parameters, naively making the total ε > 600,000, which guarantees no meaningful privacy.

• Finally, we show that the PATE approach can be successfully applied to other model structures and to datasets with different characteristics. In particular, in Appendix C, PATE protects the privacy of medical data used to train a model based on random forests.

Data partitioning and teachers: Instead of training a single model to solve the task associated with dataset (X, Y), where X denotes the set of inputs and Y the set of labels, we partition the data in n disjoint sets (X_i, Y_i) and train a model separately on each set. As evaluated in Section 4.1, assuming that n is not too large with respect to the dataset size and task complexity, we obtain n classifiers f_i called teachers. We then deploy them as an ensemble making predictions on unseen inputs x by querying each teacher for a prediction f_i(x) and aggregating these into a single prediction.

Aggregation: The privacy guarantees of this teacher ensemble stem from its aggregation. Let m be the number of classes in our task. The label count for a given class j ∈ [m] and an input x is the number of teachers that assigned class j to input x: n_j(x) = |{i : i ∈ [n], f_i(x) = j}|. If we simply apply plurality, using the label with the largest count, the ensemble's decision may depend on a single teacher's vote. Indeed, when two labels have a vote count differing by at most one, there is a tie: the aggregated output changes if one teacher makes a different prediction.
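To fix the notation in code, the label counts and the non-private plurality baseline can be sketched as follows. This is an illustrative Python sketch with names of our choosing; it is not the paper's released implementation.

    import numpy as np

    def label_counts(x, teachers, m):
        # n_j(x): number of teachers that assign class j to input x.
        counts = np.zeros(m, dtype=np.int64)
        for f in teachers:  # each teacher maps an input to a class in {0, ..., m-1}
            counts[f(x)] += 1
        return counts

    def plurality_label(x, teachers, m):
        # Non-private baseline: the label with the largest vote count.
        return int(np.argmax(label_counts(x, teachers, m)))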
We add random noise to the vote counts n_j to introduce ambiguity:

f(x) = argmax_j { n_j(x) + Lap(1/γ) }     (1)

In this equation, γ is a privacy parameter and Lap(b) the Laplacian distribution with location 0 and scale b. The parameter γ influences the privacy guarantee we can prove.
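A minimal sketch of this noisy aggregation mechanism, reusing label_counts from the sketch above; note that numpy's scale argument is the Laplacian scale b = 1/γ:

    def noisy_max(x, teachers, m, gamma, rng=np.random):
        # Equation (1): perturb each vote count with Lap(1/gamma) noise,
        # then release only the index of the largest noisy count.
        counts = label_counts(x, teachers, m).astype(np.float64)
        counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=m)
        return int(np.argmax(counts))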
Intuitively, a small γ leads to a strong privacy guarantee, but can degrade the accuracy of the labels, as the noisy maximum f above can differ from the true plurality.

While we could use an f such as above to make predictions, the noise required would increase as we make more predictions, making the model useless after a bounded number of queries. Furthermore, privacy guarantees do not hold when an adversary has access to the model parameters. Indeed, as each teacher f_i was trained without taking privacy into account, it is conceivable that they have sufficient capacity to retain details of the training data. To address these limitations, we train another model, the student, using a fixed number of labels predicted by the teacher ensemble.

We train a student on nonsensitive and unlabeled data, some of which we label using the aggregation mechanism. This student model is the one deployed, in lieu of the teacher ensemble, so as to fix the privacy loss to a value that does not grow with the number of user queries made to the student model. Indeed, the privacy loss is now determined by the number of queries made to the teacher ensemble during student training and does not increase as end-users query the deployed student model. Thus, the privacy of users who contributed to the original training dataset is preserved even if the student's architecture and parameters are public or reverse-engineered by an adversary.

We considered several techniques to trade off the student model's quality with the number of labels it needs to access: distillation, active learning, and semi-supervised learning (see Appendix B). Here, we only describe the most successful one, used in PATE-G: semi-supervised learning with GANs.

Training the student with GANs: The GAN framework involves two machine learning models, a generator and a discriminator. They are trained in a competing fashion, in what can be viewed as a two-player game (Goodfellow et al., 2014). The generator produces samples from the data distribution by transforming vectors sampled from a Gaussian distribution. The discriminator is trained to distinguish samples artificially produced by the generator from samples that are part of the real data distribution. Models are trained via simultaneous gradient descent steps on both players' costs.

In practice, these dynamics are often difficult to control when the strategy set is non-convex (e.g., a DNN). In their application of GANs to semi-supervised learning, Salimans et al. (2016) made the following modifications. The discriminator is extended from a binary classifier (data vs. generator sample) to a multi-class classifier (one of k classes of data samples, plus a class for generated samples). This classifier is then trained to classify labeled real samples in the correct class, unlabeled real samples in any of the k classes, and the generated samples in the additional class.

Although no formal results yet explain why, the technique was empirically demonstrated to greatly improve semi-supervised learning of classifiers on several datasets, especially when the classifier is trained with a feature matching loss (Salimans et al., 2016).

Training the student in a semi-supervised fashion makes better use of the entire data available to the student, while still only labeling a subset of it. Unlabeled inputs are used in unsupervised learning to estimate a good prior for the distribution. Labeled inputs are then used for supervised learning.

We now analyze the differential privacy guarantees of our PATE approach. Namely, we keep track of the privacy budget throughout the student's training using the moments accountant (Abadi et al., 2016). When teachers reach a strong quorum, this allows us to bound privacy costs more strictly.

Differential privacy (Dwork & Roth, 2014) has established itself as a strong standard. It provides privacy guarantees for algorithms analyzing databases, which in our case is a machine learning training algorithm processing a training dataset. Differential privacy is defined using pairs of adjacent databases: in the present work, these are datasets that only differ by one training example. Recall the following variant of differential privacy introduced in Dwork et al. (2006a).

Definition 1. A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that:

Pr[M(d) ∈ S] ≤ e^ε Pr[M(d′) ∈ S] + δ.

Definition 2. Given a mechanism M, auxiliary input aux, and two adjacent databases d, d′, the privacy loss at an outcome o is

c(o; M, aux, d, d′) = log ( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ).

The privacy loss random variable C(M, aux, d, d′) is defined as c(M(d); M, aux, d, d′), i.e., the random variable obtained by evaluating the privacy loss at an outcome sampled from M(d).

A natural way to bound our approach's privacy loss is to first bound the privacy cost of each label queried by the student, and then use the strong composition theorem (Dwork et al., 2010) to derive the total cost of training the student. For neighboring databases d, d′, each teacher gets the same training data partition (that is, the same for the teacher with d and with d′, not the same across teachers), with the exception of one teacher whose corresponding training data partition differs. Therefore, the label counts n_j(x) for any example x, on d and d′, differ by at most 1 in at most two locations. In the next subsection, we show that this yields loose guarantees.

To better keep track of the privacy cost, we use recent advances in privacy cost accounting. The moments accountant was introduced by Abadi et al. (2016), building on previous work (Bun & Steinke, 2016; Dwork & Rothblum, 2016; Mironov, 2016).

Definition 3. Let M: D → R be a randomized mechanism and d, d′ a pair of adjacent databases. Let aux denote an auxiliary input. The moments accountant is defined as:

α_M(λ) ≜ max_{aux, d, d′} α_M(λ; aux, d, d′),

where α_M(λ; aux, d, d′) ≜ log E[exp(λ C(M, aux, d, d′))] is the moment generating function of the privacy loss random variable.

The following properties of the moments accountant are proved in Abadi et al. (2016).

Theorem 1. 1. [Composability] Suppose that a mechanism M consists of a sequence of adaptive mechanisms M_1, ..., M_k, where M_i: ∏_{j=1}^{i-1} R_j × D → R_i. Then, for any output sequence o_1, ..., o_{k-1} and any λ,

α_M(λ; d, d′) ≤ Σ_{i=1}^{k} α_{M_i}(λ; o_1, ..., o_{i-1}, d, d′).

2. [Tail bound] For any ε > 0, the mechanism M is (ε, δ)-differentially private for

δ = min_λ exp(α_M(λ) - λε).
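To illustrate how these properties are used downstream, here is a simplified sketch of our own (the released analysis script is more careful): per-query moment bounds are accumulated by summation as in the composability property, and the totals are converted into an (ε, δ) guarantee by solving the tail bound for ε.

    import math

    LAMBDAS = range(1, 9)  # moments are evaluated at integer lambda up to 8

    def accumulate(totals, per_query_alpha):
        # Composability: the total moment at each lambda is bounded by the
        # sum of the per-query moment bounds.
        for lam in LAMBDAS:
            totals[lam] = totals.get(lam, 0.0) + per_query_alpha(lam)
        return totals

    def epsilon(totals, delta):
        # Tail bound: delta = min_lambda exp(alpha(lambda) - lambda * eps),
        # i.e., eps = min_lambda (alpha(lambda) + log(1/delta)) / lambda.
        return min((alpha + math.log(1.0 / delta)) / lam
                   for lam, alpha in totals.items())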
We write down two important properties of the aggregation mechanism from Section 2.1. The first property is proved in Dwork & Roth (2014), and the second follows from Bun & Steinke (2016).

Theorem 2. Suppose that on neighboring databases d, d′, the label counts n_j differ by at most 1 in each coordinate. Let M be the mechanism that reports argmax_j {n_j + Lap(1/γ)}. Then M satisfies (2γ, 0)-differential privacy. Moreover, for any λ, aux, d, and d′,

α_M(λ; aux, d, d′) ≤ 2γ²λ(λ + 1).

The following theorem, proved in Appendix A, provides a data-dependent bound on the moments of any differentially private mechanism where some specific outcome is very likely.

Theorem 3. Let M be (2γ, 0)-differentially private and q ≥ Pr[M(d) ≠ o*] for some outcome o*. Then for any aux and any neighbor d′ of d, M satisfies

α_M(λ; aux, d, d′) ≤ log( (1 - q) ( (1 - q) / (1 - e^{2γ} q) )^λ + q e^{2γλ} ).

To upper bound q for our aggregation mechanism, we use the following simple lemma, also proved in Appendix A.

Lemma 4. Let n be the label score vector for a database d with n_{j*} ≥ n_j for all j. Then

Pr[M(d) ≠ j*] ≤ Σ_{j ≠ j*} (2 + γ(n_{j*} - n_j)) / (4 exp(γ(n_{j*} - n_j))).

This allows us to upper bound q for a specific score vector n, and hence bound specific moments. We take the smaller of the bounds we get from Theorems 2 and 3. We compute these moments for a few values of λ (integers up to 8). Theorem 1 allows us to add these bounds over successive steps, and derive an (ε, δ) guarantee from the final α. Interested readers are referred to the script that we used to empirically compute these bounds, which is released along with our code on GitHub.

At each step, we use the aggregation mechanism with noise Lap(1/γ), which is (2γ, 0)-differentially private. Thus over T steps, we get (4Tγ² + 2γ√(2T ln(1/δ)), δ)-differential privacy. This can be rather large: plugging in values that correspond to our SVHN result (γ = 0.05, T = 1000, δ = 10⁻⁶) gives ε ≈ 26, and plugging in values that correspond to our MNIST result (γ = 0.05, T = 100, δ = 10⁻⁵) gives ε ≈ 5.80.

Our data-dependent privacy analysis takes advantage of the fact that when the quorum among the teachers is very strong, the majority outcome has overwhelming likelihood, in which case the privacy cost is small whenever this outcome occurs. The moments accountant allows us to analyze the composition of such mechanisms in a unified framework.

Since the privacy moments are themselves now data dependent, the final ε is itself data-dependent and should not be revealed. To get around this, we bound the smooth sensitivity (Nissim et al., 2007) of the moments and add noise proportional to it to the moments themselves. This gives us a differentially private estimate of the privacy cost. Our evaluation in Section 4 ignores this overhead and reports the un-noised values of ε. Indeed, in our experiments on MNIST and SVHN, the scale of the noise one needs to add to the released ε is smaller than 0.5 and 1.0 respectively.

How does the number of teachers affect the privacy cost? Recall that the student uses a noisy label computed in (1), which has a parameter γ. To ensure that the noisy label is likely to be the correct one, the noise scale 1/γ should be small compared to the additive gap between the two largest values of n_j. While the exact dependence of γ on the privacy cost in Theorem 3 is subtle, as a general principle, a smaller γ leads to a smaller privacy cost. Thus, a larger gap translates to a smaller privacy cost. Since the gap itself increases with the number of teachers, having more teachers would lower the privacy cost. This is true up to a point. 
With n teachers, each teacher only trains on a 2\nfraction of the training data, For large enough n, each teachers will have too little training data to be\naccurate."}, {"section_index": "3", "section_name": "4 EVALUATION", "section_text": "Tn our evaluation of PATE and its generative variant PATE-G, we first train a teacher ensemble for\neach dataset. The trade-off between the accuracy and privacy of labels predicted by the ensemble\nis greatly dependent on the number of teachers in the ensemble: being able to train a large set of\nteachers is essential to support the injection of noise yielding strong privacy guarantees while having\na limited impact on accuracy. Second, we minimize the privacy budget spent on learning the student\nby training it with as few queries to the ensemble as possible.\nOur experiments use MNIST and the extended SVHN datasets. Our MNIST model stacks two\nconvolutional layers with max-pooling and one fully connected layer with ReLUs. When trained on\nthe entire dataset, the non-private model has a 99.18% test accuracy. For SVHN, we add two hidden\nlayers|]] The non-private model achieves a 92.8% test accuracy, which is shy of the state-of-the-art.\nHowever, we are primarily interested in comparing the private student\u2019s accuracy with the one of a\nnon-private model trained on the entire dataset, for different privacy guarantees. The source code\nfor reproducing the results in this section is available on GitHub]\nAs mentioned above, compensating the noise introduced by the Laplacian mechanism presented it\nEquation[]requires large ensembles. We evaluate the extent to which the two datasets considered cat\nbe partitioned with a reasonable impact on the performance of individual teachers, Specifically, we\nshow that for MNIST and SVHN, we are able to train ensembles of 250 teachers. Their aggregatec\npredictions are accurate despite the injection of large amounts of random noise to ensure privacy\nThe aggregation mechanism output has an accuracy of 93.18% for MNIST and 87.79% for SVHN\nwhen evaluated on their respective test sets, while each query has a low privacy budget of e = 0.05\nPrediction accuracy: All other things being equal, the number 7 of teachers is limited by a trade-\noff between the classification task\u2019s complexity and the available data. We train n teachers by\npartitioning the training data n-way. Larger values of n lead to larger absolute gaps, hence poten-\ntially allowing for a larger noise level and stronger privacy guarantees. At the same time, a larger\nn implies a smaller training dataset for each teacher, potentially reducing the teacher accuracy. We\nempirically find appropriate values of n for the MNIST and SVHN datasets by measuring the test\nTo conclude, we note that our analysis is rather conservative in that it pessimistically assumes that,\neven if just one example in the training set for one teacher changes, the classifier produced by that\nteacher may change arbitrarily. 
One advantage of our approach, which enables its wide applica-\nbility, is that our analysis does not require any assumptions about the workings of the teachers.\nNevertheless, we expect that stronger privacy guarantees may perhaps be established in specific\nsettings\u2014when assumptions can be made on the learning algorithm used to train the teachers.\n100,\ngs \u2014\u2014\u2014\u2014\nov fr fo\" .\n& 80} of of oe\n2 ot fy Ce\nBooty Cee\n& sold | Li-*) = MNIST (n=10)\nBale lm \u00bb\u2014\u00ab MNIST (n=100)\n$ \u201cry ie \u00bb\u2014* MNIST (n=250)\n> 30t] fo >\u00bb SVHN (n=10)\nSag i ho\u00bb SVHN (n=100}\ng ee poe SVHN (n=250}\n\n486 OL 0.2 03 0.4 Os\n\nYY per label query\nFigure 2: How much noise can be injected\nto a query? Accuracy of the noisy aggrega-\ntion for three MNIST and SVHN teacher en-\nsembles and varying value per query. The\nnoise introduced to achieve a given +y scales\ninversely proportionally to the value of +:\nsmall values of -y on the left of the axis corre-\nspond to large noise amplitudes and large -y\nvalues on the right to small noise.\nPrediction confidence: As outlined in Section[3} the privacy of predictions made by an ensembl\nof teachers intuitively requires that a quorum of teachers generalizing well agree on identical labels\nThis observation is reflected by our data-dependent privacy analysis, which provides stricter privac\nbounds when the quorum is strong. We study the disparity of labels assigned by teachers. In othe\nwords, we count the number of votes for each possible label, and measure the difference in vote\nbetween the most popular label and the second most popular label, i-e., the gap. If the gap is smal:\nintroducing noise during aggregation might change the label assigned from the first to the seconc\nFigure [3] shows the gap normalized by the total number of teachers n. As n increases, the ga\nremains larger than 60% of the teachers, allowing for aggregation mechanisms to output the correc\nlabel in the presence of noise.\nNoisy aggregation: For MNIST and SVHN, we consider three ensembles of teachers with varying\nnumber of teachers n \u20ac {10, 100, 250}. For each of them, we perturb the vote counts with Laplacian\nnoise of inversed scale -y ranging between 0.01 and 1. This choice is justified below in Section[.2|\nWe report in Figure[|the accuracy of test set labels inferred by the noisy aggregation mechanism for\nthese values of \u00a2. Notice that the number of teachers needs to be large to compensate for the impact\nof noise injection on the accuracy.\n100,\ng\n2\n3\nSs 80\n3\n\u00a3\n6\n5 60\n3\n=e\n5\n2\nB 40\n3\n3\nBX\n3\nE 20\n2 |{c nist\n& |[co sven\noboe oe\n1 2 3 4 5 10 25 50 100 250\nSlumhar af taachare\nFigure 3: How certain is the aggregation of\nteacher predictions? Gap between the num-\nber of votes assigned to the most and second\nmost frequent labels normalized by the num-\nber of teachers in an ensemble. Larger gaps\nindicate that the ensemble is confident in as-\nsigning the labels, and will be robust to more\nnoise injection. Gaps were computed by av-\neraging over the test data.\nset accuracy of each teacher trained on one of the n partitions of the training data. We find that even\nfor n = 250, the average test accuracy of individual teachers is 83.86% for MNIST and 83.18% for\nSVHN. The larger size of SVHN compensates its increased task complexity.\nThe noisy aggregation mechanism labels the student\u2019s unlabeled training set in a privacy-preserving\nfashion. 
To reduce the privacy budget spent on student training, we are interested in making as few\nlabel queries to the teachers as possible. We therefore use the semi-supervised training approach de-\nscribed previously. Our MNIST and SVHN students with (<, 6) differential privacy of (2.04, 10-5)\nand (8.19, 10\u2014\u00b0) achieve accuracies of 98.00% and 90.66%. These results improve the differential\nprivacy state-of-the-art for these datasets. previously obtained 97% accurac\nwith a (8, 10\u2014\u00b0) bound on MNIST, starting from an inferior baseline model without privacy. [Shokri\n& Shmatikov reported about 92% accuracy on SVHN with \u00a2 > 2 per model parameter and a\nmodel with over 300,000 parameters. Naively, this corresponds to a total e > 600,000.\nFigure 4: Utility and privacy of the semi-supervised students: each row is a variant of the stu\ndent model trained with generative adversarial networks in a semi-supervised way, with a differen\nnumber of label queries made to the teachers through the noisy aggregation mechanism. The las\ncolumn reports the accuracy of the student and the second and third column the bound \u00a2 and failure\nprobability 6 of the (\u00a2, 5) differential privacy guarantee.\nWe apply semi-supervised learning with GANs to our problem using the following setup for eacl\ndataset. In the case of MNIST, the student has access to 9,000 samples, among which a subse\nof either 100, 500, or 1,000 samples are labeled using the noisy aggregation mechanism discusse:\nin Section 2.1] Its performance is evaluated on the 1,000 remaining samples of the test set. Not\nthat this may increase the variance of our test set accuracy measurements, when compared to thos:\ncomputed over the entire test data. For the MNIST dataset, we randomly shuffle the test set to ensun\nthat the different classes are balanced when selecting the (small) subset labeled to train the student\nFor SVHN, the student has access to 10,000 training inputs, among which it labels 500 or 1,00!\nsamples using the noisy aggregation mechanism. Its performance is evaluated on the remainin;\n16,032 samples. For both datasets, the ensemble is made up of 250 teachers. We use Laplacian scal:\nof 20 to guarantee an individual query privacy bound of \u00a2 = 0.05. These parameter choices ar\nmotivated by the results from Section[4.1]\nIn Figure [A] we report the values of the (\u00a2, 6) differential privacy guarantees provided and the cor-\nresponding student accuracy, as well as the number of queries made by each student. The MNIST\nstudent is able to learn a 98% accurate model, which is shy of 1% when compared to the accuracy\nof a model learned with the entire training set, with only 100 label queries. This results in a strict\ndifferentially private bound of \u00a2 = 2.04 for a failure probability fixed at 10\u2014>. The SVHN stu-\ndent achieves 90.66% accuracy, which is also comparable to the 92.80% accuracy of one teacher\nlearned with the entire training set. The corresponding privacy bound is \u00a2 = 8.19, which is higher\nthan for the MNIST dataset, likely because of the larger number of queries made to the aggregation\nmechanism,\nSeveral privacy definitions are found in the literature. For instance, k-anonymity requires information\nabout an individual to be indistinguishable from at least k \u2014 1 other individuals in the dataset CC\n[Sweeney][2002}. However, its lack of randomization gives rise to caveats (Dwork & Roth] 2014}, and\nattackers can infer properties of the dataset (Aggarwal][2005). 
An alternative definition, differential\nprivacy, established itself as a rigorous standard for providing privacy guarantees\n[2006b). In contrast to k-anonymity, differential privacy is a property of the randomized algorithm\nand not the dataset itself.\nA variety of approaches and mechanisms can guarantee differential privacy.\nshowed that randomized response, introduced. by [Warner] {1965}, can protect crowd-sourced data\ncollected from software users to compute statistics about user behaviors. Attempts to provide dif-\nferential privacy for machine learning models led to a series of efforts on shallow machine learning\nmodels, including work by (2014); (2009); [Pathak et al.\n2011}; [Song et al.]{2013), and[Wainwright et al.](2012)-\nDataset |< 6 Queries | Non-Private Baseline | Student Accuracy\nMNIST 98.00%\nMNIST 98.10%\nSVHN eM 82.72%\nSVHN | 8.19 | 1076 1000 92.80% 90.66%\nWe observe that our private student outperforms the aggregation\u2019s output in terms of accuracy, with\nor without the injection of Laplacian noise. While this shows the power of semi-supervised learning,\nthe student may not learn as well on different kinds of data (e.g., medical data), where categories are\nnot explicitly designed by humans to be salient in the input space. Encouragingly, as Appendix[C\nillustrates, the PATE approach can be successfully applied to at least some examples of such data.\nA privacy-preserving distributed SGD algorithm was introduced by[Shokri & Shmatikov| (2015p. I\napplies to non-convex models. However, its privacy bounds are given per-parameter, and the larg\u00ab\nnumber of parameters prevents the technique from providing a meaningful privacy guarantee. [Abad\nprovided stricter bounds on the privacy loss induced by a noisy SGD by introducing the\nmoments accountant. In comparison with these efforts, our work increases the accuracy of a private\nMNIST model from 97% to 98% while improving the privacy bound \u00a2 from 8 to 1.9. Furthermore\nthe PATE approach is independent of the learning algorithm, unlike this previous work. Suppor\nfor a wide range of architecture and training algorithms allows us to obtain good privacy bound:\non an accurate and private SVHN model. However, this comes at the cost of assuming that non:\nprivate unlabeled data is available, an assumption that is not shared by Shokri &\nShmatikov][2015}.\nfirst discussed secure multi-party aggregation of locally trained classifiers for a\nglobal classifier hosted by a trusted third-party. proposed the use of knowledge\ntransfer between a collection of models trained on individual devices into a single model guaran-\nteeing differential privacy. Their work studied linear student models with convex and continuously\ndifferentiable losses, bounded and c-Lipschitz derivatives, and bounded features. The PATE ap-\nproach of this paper is not constrained to such applications, but is more generally applicable.\nPrevious work also studied semi-supervised knowledge transfer from private models. For instance\nlearned privacy-preserving random forests. A key difference is that thei\napproach is tailored to decision trees. PATE works well for the specific case of decision trees, a:\ndemonstrated in Appendix[C] and is also applicable to other machine learning algorithms, including\nmore complex ones. Another key difference is that [Tagannathan et al] (2013) modified the classic\nmodel of a decision tree to include the Laplacian mechanism. 
Thus, the privacy guarantee does\nnot come from the disjoint sets of training data analyzed by different decision trees in the randor\nforest, but rather from the modified architecture. In contrast, partitioning is essential to the privacy\nguarantees of the PATE approach.\nTo protect the privacy of sensitive training data, this paper has advanced a learning strategy and <\ncorresponding privacy analysis. The PATE approach is based on knowledge aggregation and transfei\nfrom \u201cteacher\u201d models, trained on disjoint data, to a \u201cstudent\u201d model whose attributes may be mad\u00ab\npublic. In combination, the paper\u2019s techniques demonstrably achieve excellent utility on the MNIST\nand SVHN benchmark tasks, while simultaneously providing a formal, state-of-the-art bound or\nusers\u2019 privacy loss. While our results are not without limits\u2014e.g., they require disjoint training\ndata for a large number of teachers (whose number is likely to increase for tasks with many outpu\nclasses)\u2014they are encouraging, and highlight the advantages of combining semi-supervised learn.\ning with precise, data-dependent privacy analysis, which will hopefully trigger further work. Ir\nparticular, such future work may further investigate whether or not our semi-supervised approact\nwill also reduce teacher queries for tasks other than MNIST and SVHN, for example when the\ndiscrete output categories are not as distinctly defined by the salient input space features.\nA key advantage is that this paper\u2019s techniques establish a precise guarantee of training data pri-\nvacy in a manner that is both intuitive and rigorous. Therefore, they can be appealing, and easily\nexplained, to both an expert and non-expert audience. However, perhaps equally compelling are the\ntechniques\u2019 wide applicability. Both our learning approach and our analysis methods are \u201cblack-\nbox,\u201d i.e., independent of the learning algorithm for either teachers or students, and therefore apply,\nin general, to non-convex, deep learning, and other learning methods. Also, because our techniques\ndo not constrain the selection or partitioning of training data, they apply when training data is natu-\nrally and non-randomly partitioned\u2014e.g., because of privacy, regulatory, or competitive concerns\u2014\nor when each teacher is trained in isolation, with a different method. We look forward to such further\napplications, for example on RNNs and other sequence-based models."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Nicolas Papernot is supported by a Google PhD Fellowship in Security. The authors would like tc\nthank Ilya Mironov and Li Zhang for insightful discussions about early drafts of this document."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988.\nRaef Bassily, Adam Smith, and Abhradeep Thakurta. Differentially private empirical risk minimiza-\ntion: efficient algorithms and tight error bounds. arXiv preprint arXiv: 1405.7085, 2014.\nEric B Baum. Neural net algorithms that learn in polynomial time from examples and queries. /EE]\nTransactions on Neural Networks, 2(1):5-19, 1991.\nLeo Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1994.\nKamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical\nrisk minimization. Journal of Machine Learning Research, 12(Mar):1069-1109, 2011.\nThomas G Dietterich. 
Ensemble methods in machine learning. In International workshop on multi.\nple classifier systems, pp. 1-15. Springer, 2000.\nJane Bromley, James W Bentz, L\u00e9on Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard\nSackinger, and Roopak Shah. Signature verification using a \u201cSiamese\u201d time delay neural network.\nTnternatianal Tournal af Pattorn Recnonition and Artificial Intelliconce THMAY6AO6RR 1903\nCynthia Dwork. A firm foundation for private data analysis. Communications of the ACM, 54(1):\n86-95, 2011.\n\nCynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations\nand Trends in Theoretical Computer Science, 9(3-4):21 1-407, 2014.\n\nCynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint\narXiv: 1603.01887, 2016.\n\nCynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data,\nourselves: privacy via distributed noise generation. In Advances in Cryptology-EUROCRYPT\n2006, pp. 486-503. Springer, 2006a.\n\nCynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity\nin private data analysis. In Theory of Cryptography, pp. 265-284. Springer, 2006b.\n\nCynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In Pro-\nceedings of the 51st IEEE Symposium on Foundations of Computer Science, pp. 51-60. IEEE,\n2010.\n\nUlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable\nprivacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on\nComputer and Communications Security. pp. 1054-1067. ACM, 2014.\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXi\npreprint arXiv: 150302531, 2015.\nIgor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective.\nArtificial Intelligence in medicine, 23(1):89-109, 2001.\nIlya Mironov. Renyi differential privacy. manuscript, 2016.\nJason Poulos and Rafael Valle. Missing data imputation for supervised learning. arXiv preprint\narXiv: 1610.09075, 2016.\nihun Hamm, Paul Cao, and Mikhail Belkin, Learning privately from multiparty data. arXiv preprini\narXiv: 1602 03952. 2016\nfim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.\nImproved techniques for training GANs. arXiv preprint arXiv: 1606.03498, 2016.\nStanley L Warner. Randomized response: A survey technique for eliminating evasive answer bias.\nJournal of the American Statistical Association, 60(309):63-69, 1965.\n1- t\nails aux, dd\u2019) < log((1\u2014 (Gag) + 2exP2w).\nf(z) =(1-2) (73) + 267,\nWe next argue that this function is non-decreasing in (0, =4) under the conditions of the lemma.\nTowards this goal, define\nl-w)! ry\na() = (1-2)(s\u2014) +2e\"\",\"\nLemma|4] Let n be the label score vector for a database d with nj\u00bb > 13 for all j. Then\n24+ 9(nj\u00ab \u2014 75)\nPrIM(@) #7] s us 4exp(y(nj- \u2014 3)\noo\n[ (y+ lae\u2122Hlye dy = \u2014 [ a\n_ ye? dy =\ny=0 Y elzl [@ + yla|je\" dy = 1+ Ie|\ny= 4elel\n(Pua =o J!\napla(lsaux, da\u2019) =) PIM(d) = ol Sia @y =o) Pr[M(@) = 0]!\n3 wf Pr{M(d) = 0\u00b0] y + $2 Pr[M(@) = AM eMa)y Sol ayaa\n\n= PIM) =0'l( Saya) aor] x\n\n=\u00a2 )! [MM (d) = o](e77)!\n<(0-\u00a2)(35) +> Pe\n\n-\u00a2 ! fay\n< (Gay) +gem.\nand observe that f(z) = g(z,z). We can easily verify by differentiation that g(z, w) is increasing\nindividually in z and in w in the range of interest. 
This implies that f(g\u2019) < f(q) completing the\nproof. Oo\nProof. The probability that nj+ + Lap(+) <njt Lap(+) is equal to the probability that the sum\nof two independent Lap(1) random variables exceeds y(nj\u00ab \u2014 nj). The sum of two independent\nLap(1) variables has the same distribution as the difference of two Gamma(2, 1) random variables.\nRecalling that the Gamma(2, 1) distribution has pdf ze~\u201c, we can compute the pdf of the difference\nvia convolution as\nTn this appendix, we describe approaches that were considered to reduce the number of queries made\nto the teacher ensemble by the student during its training. As pointed out in Sections [3] andj] this\neffort is motivated by the direct impact of querying on the total privacy cost associated with student\ntraining. The first approach is based on distillation, a technique used for knowledge transfer and\nmodel compression (2015). The three other techniques considered were proposed\nin the context of active learning, with the intent of identifying training examples most useful for\nlearning. In Sections[2]andA] we described semi-supervised learning, which yielded the best results.\nThe student models in this appendix differ from those in Sections[2|and[4] which were trained using\nGANs. In contrast, all students in this appendix were learned in a fully supervised fashion from\na subset of public, labeled examples. Thus, the learning goal was to identify the subset of labels\nvielding the best learning performance."}, {"section_index": "6", "section_name": "B.1 TRAINING STUDENTS USING DISTILLATION", "section_text": "Distillation is a knowledge transfer technique introduced as a means of compressing large model:\ninto smaller ones, while retaining their accuracy (Bucilua et al,][2006}[Hinton et al.][2015). This is for\ninstance useful to train models in data centers before deploying compressed variants in phones. Th\u00ab\ntransfer is accomplished by training the smaller model on data that is labeled with probability vector:\nproduced by the first model, which encode the knowledge extracted from training data. Distillatior\nis parameterized by a temperature parameter T', which controls the smoothness of probabilitie:\noutput by the larger model: when produced at small temperatures, the vectors are discrete, wherea:\nat high temperature, all classes are assigned non-negligible values. Distillation is a natural candidate\nto compress the knowledge acquired by the ensemble of teachers, acting as the large model, into \u00ab\nstudent. which is much smaller with n times less trainable parameters compared to the n teachers.\nTo evaluate the applicability of distillation, we consider the ensemble of n = 50 teachers for SVHN.\nTn this experiment, we do not add noise to the vote counts when aggregating the teacher predictions.\nWe compare the accuracy of three student models: the first is a baseline trained with labels obtained\nby plurality, the second and third are trained with distillation at T \u20ac {1,5}. We use the first 10,00\u20ac\nsamples from the test set as unlabeled data. Figure[5] reports the accuracy of the student model on\nthe last 16,032 samples from the test set, which were not accessible to the model during training. Il\nis plotted with respect to the number of samples used to train the student (and hence the number of\nqueries made to the teacher ensemble). 
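For reference, a sketch of the temperature-parameterized softmax that underlies distillation, assuming numpy is available as np (illustrative; T is the temperature discussed above):

    def softened_probabilities(logits, T):
        # Higher T spreads probability mass across classes; T = 1 recovers
        # the standard softmax.
        z = np.asarray(logits, dtype=np.float64) / T
        z -= z.max()  # subtract the max for numerical stability
        p = np.exp(z)
        return p / p.sum()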
Although applying distillation yields classifiers that perform\nmore accurately, the increase in accuracy is too limited to justify the increased privacy cost of re-\nvealing the entire probability vector output by the ensemble instead of simply the class assigned the\nlargest number of votes. Thus, we turn to an investigation of active learning."}, {"section_index": "7", "section_name": "B.2 ACTIVE LEARNING OF THE STUDENT", "section_text": "Active learning is a class of techniques that aims to identify and prioritize points in the student\u2019:\ntraining set that have a high potential to contribute to leaming {Angluin| [1988] [Baum] [T991). If the\nlabel of an input in the student\u2019s training set can be predicted confidently from what we have learnec\nso far by querying the teachers, it is intuitive that querying it is not worth the privacy budget spent\nIn our experiments, we made several attempts before converging to a simpler final formulation.\nSiamese networks: Our first attempt was to train a pair of siamese networks, introduced by|Brom-\nley etal. in the context of one-shot learning and later improved by[Koch]{2015). The siamese\nnetworks take two images as input and return 1 if the images are equal and 0 otherwise. They are\ntwo identical networks trained with shared parameters to force them to produce similar represen.\ntations of the inputs, which are then compared using a distance metric to determine if the image:\nare identical or not. Once the siamese models are trained, we feed them a pair of images where\nthe first is unlabeled and the second labeled. If the unlabeled image is confidently matched with <\nknown labeled image, we can infer the class of the unknown image from the labeled image. In ow\n2xperiments, the siamese networks were able to say whether two images are identical or not, but dic\nnot generalize well: two images of the same class did not receive sufficiently confident matches. We\nalso tried a variant of this approach where we trained the siamese networks to output 1 when the twe\n90\n\n85 yo\no (\n\u00a9 80\n2\n3\no\na 75\na\n\u00a3\n\u2018S\no\n70\na\n\n65 x\u2014x Distilled Vectors\n\n\u00bb\u2014x Labels only\nx\u2014< Distilled Vectors at T=5\n60\n0 2000 4000 6000 8000 100\u00a2\nStudent share of samples in SVHN test set (out of 26032)\nimages are of the same class and 0 otherwise, but the learning task proved too complicated to be a\neffective means for reducing the number of queries made to teachers.\nCollection of binary experts: Our second attempt was to train a collection of binary experts, one\nper class. An expert for class 7 is trained to output 1 if the sample is in class 7 and 0 otherwise\nWe first trained the binary experts by making an initial batch of queries to the teachers. Using\nthe experts, we then selected available unlabeled student training points that had a candidate labe\nscore below 0.9 and at least 4 other experts assigning a score above 0.1. This gave us about 50(\nunconfident points for 1700 initial label queries, After labeling these unconfident points using the\nensemble of teachers, we trained the student. Using binary experts improved the student\u2019s accuracy\nwhen compared to the student trained on arbitrary data with the same number of teacher queries\nThe absolute increases in accuracy were however too limited\u2014between 1.5% and 2.5%.\nIdentifying unconfident points using the student: This last attempt was the simplest yet the mos\neffective. 
Instead of using binary experts to identify student training points that should be labeled\nthe teachers, we used the student itself. We asked the student to make predictions on each unlabele:\ntraining point available. We then sorted these samples by increasing values of the maximum proba\nbility assigned to a class for each sample. We queried the teachers to label these unconfident input\nfirst and trained the student again on this larger labeled training set. This improved the accuracy o\nthe student when compared to the student trained on arbitrary data. For the same number of teache\nqueries, the absolute increases in accuracy of the student trained on unconfident inputs first whe\ncompared to the student trained on arbitrary data were in the order of 4% \u2014 10%.\nFigure 5: Influence of distillation on the accuracy of the SVHN student trained with respect to the\ninitial number of training samples available to the student. The student is learning from n = 50\nteachers, whose predictions are aggregated without noise: in case where only the label is returned,\nwe use plurality, and in case a probability vector is returned, we sum the probability vectors output\nby each teacher before normalizing the resulting vector.\nC APPENDIX: ADDITIONAL EXPERIMENTS ON THE UCI ADULT AND\nDIABETES DATASETS\nUCI Adult dataset: The UCI Adult dataset is made up of census data, and the task is to predict\nwhen individuals make over $50k per year. Each input consists of 13 features (which include the age,\nworkplace, education, occupation\u2014see the UCI website for a full list. The only pre-processing we\napply to these features is to map all categorical features to numerical values by assigning an integei\nvalue to each possible category. The model is a random forest provided by the scikit-learn\nPython package. When training both our teachers and student, we keep all the default paramete:\nvalues, except for the number of estimators, which we set to 100. The data is split between a\ntraining set of 32,562 examples, and a test set of 16,282 inputs.\nUCI Diabetes dataset: The UCI Diabetes dataset includes de-identified records of diabetic patients\nand corresponding hospital outcomes, which we use to predict whether diabetic patients were read-\nmitted less than 30 days after their hospital release. To the best of our knowledge, no particular\nclassification task is considered to be a standard benchmark for this dataset. Even so, it is valuable\nto consider whether our approach is applicable to the likely classification tasks, such as readmission,\nsince this dataset is collected in a medical environment\u2014a setting where privacy concerns arise\nfrequently. We select a subset of 18 input features from the 55 available in the dataset (to avoid\nfeatures with missing values) and form a dataset balanced between the two output classes (see the\nUCI website for more detail Tn class 0, we include all patients that were readmitted in a 30-day\nwindow, while class 1 includes all patients that were readmitted after 30 days or never readmitted at\nall. Our balanced dataset contains 34,104 training samples and 12,702 evaluation samples. We use\na random forest model identical to the one described above in the presentation of the Adult dataset.\nExperimental results: We apply our approach described in Section[2] For both datasets, we trail\nensembles of nm = 250 random forests on partitions of the training data. 
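A sketch of this setup, assuming scikit-learn; the parameter values follow the description above and everything else is illustrative:

    from sklearn.ensemble import RandomForestClassifier
    import numpy as np

    def train_forest_teachers(X, y, n_teachers=250, n_estimators=100):
        # Partition the sensitive training set into n disjoint subsets and
        # fit one random-forest teacher per subset.
        teachers = []
        for Xp, yp in zip(np.array_split(X, n_teachers), np.array_split(y, n_teachers)):
            teachers.append(RandomForestClassifier(n_estimators=n_estimators).fit(Xp, yp))
        return teachers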
We then use the noisy aggregation mechanism, where vote counts are perturbed with Laplacian noise of inverse scale γ = 0.05, to privately label the first 500 test set inputs. We train the student random forest on these 500 test set inputs and evaluate it on the last 11,282 test set inputs for the Adult dataset, and 6,352 test set inputs for the Diabetes dataset. These numbers deliberately leave out some of the test set, which allowed us to observe how the student performance-privacy trade-off was impacted by varying the number of private labels, as well as the Laplacian scale used when computing these labels.

For the Adult dataset, we find that our student model achieves an 83% accuracy for an (ε, δ) = (2.66, 10⁻⁵) differential privacy bound. Our non-private model on the dataset achieves 85% accuracy, which is comparable to the state-of-the-art accuracy of 86% on this dataset (Poulos & Valle, 2016). For the Diabetes dataset, we find that our privacy-preserving student model achieves a 93.94% accuracy for an (ε, δ) = (1.44, 10⁻⁵) differential privacy bound. Our non-private model on the dataset achieves 93.81% accuracy.

In order to further demonstrate the general applicability of our approach, we performed experiments on two additional datasets. While our experiments on MNIST and SVHN in Section 4 used convolutional neural networks and GANs, here we use random forests to train our teacher and student models for both of the datasets. Our new results on these datasets show that, despite the differing data types and architectures, we are able to provide meaningful privacy guarantees."}]
SyOvg6jxx
[{"section_index": "0", "section_name": "A Stupy oF CountT-BASED EXPLORATION\nFOR DEEP REINFORCEMENT LEARNING", "section_text": "Haoran Tang!*, Rein Houthooft\u00ae**, Davis Foote\u201d, Adam Stooke\u201d, Xi Chen\u201c,\nYan Duan\u2018, John Schulman\u2019, Filip De Turck?, Pieter Abbeel 24\nCount-based exploration algorithms are known to perform near-optimally when\nused in conjunction with tabular reinforcement learning (RL) methods for solving\nsmall discrete Markov decision processes (MDPs). It is generally thought that\ncount-based methods cannot be applied in high-dimensional state spaces, since\nmost states will only occur once. Recent deep RL exploration strategies are able to\ndeal with high-dimensional continuous state spaces through complex heuristics\noften relying on optimism in the face of uncertainty or intrinsic motivation. In\nthis work, we describe a surprising finding: a simple generalization of the classic\ncount-based approach can reach near state-of-the-art performance on various high.\ndimensional and/or continuous deep RL benchmarks, States are mapped to hash\ncodes, which allows to count their occurrences with a hash table. These counts\nare then used to compute a reward bonus according to the classic count-based\nexploration theory. We find that simple hash functions can achieve surprisingly good\nresults on many challenging tasks. Furthermore, we show that a domain-dependent\nlearned hash code may further improve these results. Detailed analysis reveals\nimportant aspects of a good hash function: 1) having appropriate granularity and\n2) encoding information relevant to solving the MDP. This exploration strategy\nachieves near state-of-the-art performance on both continuous control tasks and\nAtari 2600 games, hence providing a simple yet powerful baseline for solving\nMDPs that require considerable exploration."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning\nthrough trial and error to maximize rewards. It is impossible for the agent to act near-optimally unti\nit has sufficiently explored the environment and identified all of the opportunities for high reward, it\nall scenarios. A core challenge in RL is how to balance exploration\u2014actively seeking out novel state:\nand actions that might yield high rewards and lead to long-term gains; and exploitation\u2014maximizing\nshort-term rewards using the agent\u2019s current knowledge. While there are exploration technique:\nfor finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high\ndimensional state spaces; therefore, developing more general and robust exploration techniques is at\nactive area of research.\nMost of the recent state-of-the-art RL results have been obtained using simple exploration strategies\nsuch as uniform sampling (Mnih et al] and i.i.d/correlated Gaussian noise\n(2015). Although these heuristics are sufficient in tasks with well-shaped\nrewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse\nrewards [2016b). Recently developed exploration strategies for deep RL have led\nto significantly improved performance on environments with sparse rewards. Bootstrapped DQN\n*These authors contributed equally."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "led to faster learning in a range of Atari 2600 games by training an ensemble of\nQ-functions. 
Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance on Montezuma's Revenge, an extremely challenging Atari 2600 game (Bellemare et al., 2016). Variational Information Maximizing Exploration (VIME, Houthooft et al. (2016)) encourages the agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that can work across different domains.

Some of the classic, theoretically-justified exploration methods are based on counting state-action visitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of Lai & Robbins (1985) chooses the action a_t at time t that maximizes r̂(a_t) + √(2 log t / n(a_t)), where r̂(a_t) is the estimated reward, and n(a_t) is the number of times action a_t was previously chosen. In the MDP setting, some of the algorithms have a similar structure; for example, Model Based Interval Estimation-Exploration Bonus (MBIE-EB) of Strehl & Littman (2008) counts state-action pairs with a table n(s, a) and adds a bonus reward of the form β/√(n(s, a)) to encourage exploring less visited pairs. Kolter & Ng (2009) show that the inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces.

This paper presents a simple approach for exploration, which extends classic counting-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalization across states, and distinguishing between states. We select problems from rllab and Atari 2600 featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naive exploration strategies. The main strength of the presented approach is that it is fast, flexible and complementary to most existing RL algorithms.

In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2); demonstrates its effectiveness on challenging deep RL benchmark problems and analyzes key components of well-designed hash functions (Section 3)."}, {"section_index": "3", "section_name": "2.1 NOTATION", "section_text": "This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ_0, γ, T), in which S is the state space, A the action space, P a transition probability distribution, r : S × A → R_{≥0} a reward function, ρ_0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted reward E_{π,P} [Σ_{t=0}^{T} γ^t r(s_t, a_t)] over a policy π, which outputs a distribution over actions given a state.

Our approach discretizes the state space with a hash function φ : S → Z. An exploration bonus is added to the reward function, defined as

r^+(s, a) = β / √(n(φ(s))),     (1)

where β ∈ R_{≥0} is the bonus coefficient. Initially the counts n(·) are set to zero for the whole range of φ. For every state s_t encountered at time step t, n(φ(s_t)) is increased by one. The agent is trained with rewards (r + r^+), while performance is evaluated as the sum of rewards without bonuses.
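As a concrete illustration of Eq. (1), the following is a minimal Python sketch (not code from the paper) of the count table and the bonus computation; the hash function phi and the default beta = 0.01 are assumptions borrowed from the experimental sections.

import math
from collections import defaultdict

class CountBonus:
    # Maintains n(.) over hash codes and returns the bonus of Eq. (1).
    def __init__(self, phi, beta=0.01):
        self.phi = phi                  # hash function: state -> hashable code
        self.beta = beta                # bonus coefficient
        self.counts = defaultdict(int)  # n(.) initialized to zero

    def bonus(self, state):
        code = self.phi(state)
        self.counts[code] += 1          # n(phi(s_t)) <- n(phi(s_t)) + 1
        return self.beta / math.sqrt(self.counts[code])

During training, the environment reward r(s, a) would be augmented with bonus(s) before being passed to the RL algorithm; evaluation uses the raw rewards only.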
Note that our approach is a departure from count-based exploration methods such as MBIE-EB, since we use a state-space count n(s) rather than a state-action count n(s, a). State-action counts n(s, a) are investigated in Appendix A.6, but no significant performance gains over state counting could be witnessed.

Algorithm 1: Count-based exploration through static hashing
1 Define state preprocessor g : S → R^D
2 (In case of SimHash) Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3 Initialize a hash table with values n(·) = 0
4 for each iteration j do
5   Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6   Compute hash codes through any LSH method, e.g., for SimHash, φ(s_m) = sgn(A g(s_m))
7   Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
8   Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics. A computationally efficient type of LSH is SimHash (Charikar, 2002), which measures similarity by angular distance. SimHash retrieves a binary code of state s ∈ S as

φ(s) = sgn(A g(s)) ∈ {−1, 1}^k,     (2)

where g : S → R^D is an optional preprocessing function and A is a k × D matrix with i.i.d. entries drawn from a standard Gaussian distribution N(0, 1). The value for k controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states.

Clearly the performance of this method will strongly depend on the choice of hash function φ. One important choice we can make regards the granularity of the discretization: we would like "distant" states to be counted separately while "similar" states are merged. If desired, we can incorporate prior knowledge into the choice of φ, if there is a set of salient state features which are known to be relevant.
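A direct sketch of Eq. (2), again hypothetical code rather than the authors' implementation; states are assumed to be flat float vectors and g is taken to be the identity:

import numpy as np

def make_simhash(d, k=32, seed=0):
    # A is k x d with N(0, 1) entries; k controls the granularity
    A = np.random.RandomState(seed).randn(k, d)
    def phi(state):
        # sgn(A g(s)) as a binary tuple, directly usable as a hash-table key
        return tuple((A @ np.asarray(state, dtype=np.float64) > 0).astype(np.int8))
    return phi

A phi built this way can be plugged into the bonus sketch after Eq. (1); the choice k = 32 mirrors the rllab setting reported in Appendix A.1.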
The value for & controls the granularity: higher\nvalues lead to fewer collisions and are thus more likely to distinguish states.\nTherefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed\nconvolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes as\ninput states s and contains one special dense layer comprised of K saturating activation functions,\nAlgorithm 2: Count-based exploration using learned hash codes\n1 Define state preprocessor g : S \u2014 B\u00a5 as the binary code resulting from the autoencoder (AE)\n2 Initialize A \u00a2 R*** with entries drawn i.id. from the standard Gaussian distribution N (0, 1)\n3 Initialize a hash table with values n(.) = 0\n4 for each iteration j do\n5 Collect a set of state-action samples {(sin, am) 9 with policy\n6 Add the state samples {sin}Hy to a FIFO replay pool R\n7 if j mod jupdate = 0 then\n8 Update the AE loss function in Eq. (3) using samples drawn from the replay pool\n\n| {sn} 1 ~ 8, for example using stochastic gradient descent\n9 Compute g(sy,) = Lb(s,)], the K-dim rounded hash code for s,, learned by the AE\n10 Project g(s,,) to a lower dimension k via SimHash as $(s,) = sgn(Ag(sn))\ni Update the hash table counts Vm : 0 < m < Mas n(\u00a2(sm)) \u2014 n(O(sm)) +1\n\nM\n\n2 Update the policy 7 using rewards {rom Qm) + es}. with any RL algorithm\nmore specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closes\nbinary number, any state s can be binarized.\nSince gradients cannot be back-propagated through a rounding function, an alternative method must\nbe used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise\nU(-a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance.\nthe AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s)\nthat are sufficiently far apart from each other (Gregor et al.|[2016). Feeding a state s to the AE input,\nextracting b(s) and rounding it to | b(s)] yields a learned binary code. As such, the loss function L(-)\nover a set of collected states {si} 1 is defined as\nL(t) = Sf A> in (1 = Belsn))?s balsa)?\n(Calne) = ay Dy oaptsn) ~ ae Dymin{( = bin)? Bilsn)?} |.\nThis objective function consists of a cross-entropy term and a term that pressures the binary code laye\n\u2018o take on binary values, scaled by 2 \u20ac Ryo. The reasoning behind this is that uniform noise U(\u2014a, a\u2019\nilone is insufficient, in case the AE does not use a particular sigmoid unit. This term ensures that ar\ninused binary code output is assigned an arbitrary binary value. When omitting this term, the code i:\nnore prone to oscillations, causing unwanted bit flips, and destabilizing the counting process.\nOne the one hand, it is important that the mapping from state to code needs to remain relatively\nconsistent over time, which is nontrivial as the AE is constantly updated according to the latest datz\n(Algorithm [2]line[8). An obvious solution would be to significantly downsample the binary code to <\nvery low dimension, or by slowing down the training process. But on the other hand, the code has tc\nremain relatively unique for states that are both distinct and close together on the image manifold\nThis is tackled both by the second term in Eq. 
In order to make the AE train sufficiently fast (which is required since it is updated during the agent's training), we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code ⌊b(s)⌉, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2).

Our experiments are designed to investigate the following questions:
1. Can count-based exploration through hashing improve performance significantly across different domains? How does the proposed method compare to the current state of the art in exploration for deep RL?
2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used?
3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function?

To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Sections 3.3 and 3.4. Trust Region Policy Optimization (TRPO, Schulman et al. (2015)) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and it is relatively insensitive to hyperparameter changes. The hyperparameter settings are reported in Appendix A.1."}, {"section_index": "5", "section_name": "3.1 CONTINUOUS CONTROL", "section_text": "The rllab benchmark (Duan et al., 2016) consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in Houthooft et al. (2016); a description can be found in Appendix A.2. These tasks are all highly difficult to solve with naive exploration strategies, such as adding Gaussian noise to the actions.

Figure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely MountainCar, CartPoleSwingup, SwimmerGather, and HalfCheetah; taken from (Duan et al., 2016).

[Figure 3: learning-curve plots for (a) MountainCar, (b) CartPoleSwingup, (c) SwimmerGather, (d) HalfCheetah] Figure 3: Mean average return of different algorithms on rllab tasks with sparse rewards; the solid line represents the mean average return, while the shaded area represents one standard deviation, over 5 seeds for the baseline and SimHash.

Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al., 2016) on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather.
"}, {"section_index": "6", "section_name": "3.2 ARCADE LEARNING ENVIRONMENT", "section_text": "The Arcade Learning Environment (ALE, Bellemare et al. (2012)), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide variety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma's Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1M steps (the TRPO batch size, corresponding to 0.4M frames). Policies and value functions are neural networks with identical architectures to Mnih et al. (2016). Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame.

BASS To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see Bellemare et al. (2012)) as a static preprocessing function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions. We designed an adapted version of BASS¹ that divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that uniformly partition the intensity range [0, 255]. Mathematically, let C be the cell size (width and height), B the number of bins, (i, j) the cell location, (x, y) the pixel location, and z the channel. Then

feature(i, j, z) = ⌊ (B / (255 C²)) Σ_{(x,y) ∈ cell(i,j)} I(x, y, z) ⌋.     (4)

Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code (φ(s_t) in Line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computation overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well.
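The transform of Eq. (4) is simple enough to state directly in code; the following is a hypothetical numpy sketch (cell size C = 20 and B = 20 bins follow Appendix A.1), assuming screens of shape (H, W, 3) with H and W divisible by C:

import numpy as np

def bass_features(screen, C=20, B=20):
    H, W, _ = screen.shape
    cells = screen.reshape(H // C, C, W // C, C, 3).astype(np.float64)
    mean_intensity = cells.mean(axis=(1, 3))      # per-cell, per-channel average
    # bin the averages so that the bins uniformly partition [0, 255]
    bins = np.minimum(np.floor(B * mean_intensity / 255.0), B - 1).astype(np.int64)
    return tuple(bins.flatten())                  # hashable integer feature code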
Table 1: Atari 2600: average total reward after training for 50M time steps. Boldface numbers indicate best results. Italic numbers are the best among our methods.

                      Freeway  Frostbite¹  Gravitar  Montezuma  Solaris  Venture
TRPO (baseline)       16.5     2869        486       0          2758     121
TRPO-pixel-SimHash    31.6     4683        468       0          2897     263
TRPO-BASS-SimHash     28.4     3150        604       238        1201     616
TRPO-AE-SimHash       33.5     5214        482       75         4467     445
Double-DQN            33.3     1683        412       0          3068     98.0
Dueling network       0.0      4672        588       0          2251     497
Gorila                11.7     605         1084      4          N/A      1245
DQN Pop-Art           33.4     3469        483       0          4544     1172
A3C+                  27.3     507         246       142        2175     0
pseudo-count²         29.2     1450        -         3439       -        369

1 While Vezhnevets et al. (2016) reported a best score of 8108, their evaluation was based on the top 5 agents trained with 500M time steps, hence not comparable.
2 Results reported only for 25M time steps (100M frames).

We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the "null op" metric². We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma's Revenge and Venture, where it captures object locations better than other methods³. TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris⁴.

¹ The original BASS exploits the fact that at most 128 colors can appear on the screen. Our adapted version does not make this assumption.
² The agent takes no action for a random number (within 30) of frames at the beginning of each episode.
³ We provide videos of example game play and visualizations of the difference between Pixel-SimHash and BASS-SimHash.
⁴ Note that some design choices in other algorithms also impact exploration, such as ε-greedy and entropy regularization. Nevertheless, it is still valuable to position our results within the current literature.

[Figure 4: training curves for the six Atari 2600 games] Figure 4: Atari 2600 games: the solid line is the mean average undiscounted return per iteration, while the shaded areas represent the one standard deviation, over 5 seeds for the baseline, TRPO-pixel-SimHash, and TRPO-BASS-SimHash, and over 3 seeds for TRPO-AE-SimHash.

As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma's Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function.

In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results.
"}, {"section_index": "7", "section_name": "3.3 GRANULARITY", "section_text": "While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes granularity parameters for our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture, on which TRPO-pixel-SimHash outperforms the baseline, and choose as reward bonus coefficient β = 0.01 × 256/k to keep average bonus rewards at approximately the same scale. k = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that k = 512 tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring. Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash.

Table 2: Granularity parameters of various hash functions

Table 3: Average score at 50M time steps achieved by TRPO-pixel-SimHash

k          16    64    128   256   512
Frostbite  3326  4029  3932  4683  1117
Venture    0     218   142   263   306

The best granularity depends on both the hash function and the MDP. While adjusting granularity parameters, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards.

Montezuma's Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game, we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding relevant information for solving the MDP.

Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-BASS-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent's (x, y) location, room number, and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent's coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose a grid size s and replace the x coordinate by ⌊(x − x_min)/s⌋ (similarly for y). The bonus coefficient is chosen as β = 0.01√s to maintain the scale relative to the true reward⁵ (see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000M time steps), with a grid size s = 10.

⁵ The bonus scaling is chosen by assuming all states are visited uniformly and the average bonus reward should remain the same for any grid size.
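For concreteness, a hypothetical sketch of such a SmartHash key; the RAM indices follow Table 5 in Appendix A.3, and the exact set of entries used by the authors' best agent is the one described in the text above:

def smart_hash(ram, s=10, x_min=0, y_min=0):
    # ram: length-128 integer vector; grid size s coarsens the (x, y) location
    room, x, y = ram[3], ram[42], ram[43]
    beam_walls, objects = ram[27], ram[67]
    return (room, (x - x_min) // s, (y - y_min) // s, beam_walls, objects)

The returned tuple plays the role of φ(s) in Algorithm 1, so counting and the bonus of Eq. (1) apply unchanged.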
[Figure 5: learning curves comparing exact enemy locations, ignoring enemies, and random enemy locations] Figure 5: SmartHash results on Montezuma's Revenge (RAM observations): the solid line is the mean average undiscounted return per iteration, while the shaded areas represent the one standard deviation, over 5 seeds.

Table 4: Average score at 50M time steps achieved by TRPO-SmartHash on Montezuma's Revenge (RAM observations)

During our pursuit, we made another interesting discovery: the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We experimented with including enemy locations in the first two rooms in SmartHash (s = 10), and observed that the average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously "enjoys" watching enemy motions at a distance (since new states are constantly observed) and "forgets" that its main objective is to enter other rooms. An alternative hash function keeps the same entry "enemy locations", but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5). In retrospect, we examine the hash codes generated by BASS-SimHash and find that the codes clearly distinguish between visually different states (including various enemy locations), but fail to emphasize that the agent needs to explore different rooms. Again this example showcases the importance of encoding relevant information in designing hash functions."}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "Classic count-based methods such as MBIE (Strehl & Littman, 2005), MBIE-EB, and (Kolter & Ng, 2009) solve an approximate Bellman equation as an inner loop before the agent takes an action (Strehl & Littman, 2008). As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based or policy gradient-based methods, at limited speed. In addition, while our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based methods in that it relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gaps between our method and classic theories is an important direction of future research.
Another type of exploration is curiosity-based exploration. These methods try to capture the agent's surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan for an extensive review on curiosity and intrinsic rewards.

The most related exploration strategy is proposed by Bellemare et al. (2016), in which an exploration bonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods are performing approximate counting to have the necessary generalization over unseen states. The difference is that a density model has to be designed and learned to achieve good generalization for pseudo-counts, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model p(s) = n(φ(s))/N over all visited states, where N is the total number of states visited. Another method similar to hashing is proposed by Abel et al. (2016), which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems.

A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002), but is not restricted to using counting to implement "optimism", e.g., R-Max (Brafman & Tennenholtz, 2002), UCRL (Jaksch et al., 2010), and E³ (Kearns & Singh, 2002). These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings.

Bayesian RL methods (Kolter & Ng, 2009; Ghavamzadeh et al., 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by Pazis & Parr (2013) and Osband et al. (2016b).

Several exploration strategies for deep RL have been proposed recently to handle high-dimensional state spaces. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics, which is used as an exploration bonus. Stadie et al. (2015) propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by Osband et al. (2016a), using bootstrapped Q-functions.

This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).
"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems (NIPS), 2016.

Burton H. Bloom. Space/time trade-offs in hash coding with allowable errors. Communications of the ACM, 13(7):422-426, 1970.

Moses S Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, pp. 380-388, 2002.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Ronen I Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.

Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.

David G Lowe. Object recognition from local scale-invariant features. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1150-1157. IEEE, 1999.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448-456, 2015.

Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.

Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.

Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning values across many orders of magnitude. arXiv preprint arXiv:1602.07714, 2016a.
Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, and Koray Kavukcuoglu. Strategic attentive writer for learning macro-actions. In Advances in Neural Information Processing Systems (NIPS), 2016.

Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.

Alexander L Strehl and Michael L Littman. A theoretical analysis of model-based interval estimation. In International Conference on Machine Learning (ICML), pp. 856-863, 2005."}, {"section_index": "11", "section_name": "A.1 HYPERPARAMETER SETTINGS", "section_text": "For the rllab experiments, we used batch size 5000 for all tasks except SwimmerGather, for which we used batch size 50000. CartPoleSwingup makes use of a neural network policy with one layer of 32 tanh units. The other tasks make use of a two-layer neural network policy of 32 tanh units each for MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are modeled by a fully factorized Gaussian distribution N(μ, σ²), in which μ is modeled as the network output, while σ is a parameter. CartPoleSwingup makes use of a neural network baseline with one layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we used TRPO step size 0.01 and discount factor γ = 0.99. We choose SimHash parameter k = 32 and bonus coefficient β = 0.01, found through a coarse grid search.

For Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set to 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 × 8 and 4 × 4, strides 4 and 2, using no padding, feeding into a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The input frames are downsampled to 52 × 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to γ = 0.995. All inputs are rescaled to [−1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which use 3 different training seeds. Batch normalization (Ioffe & Szegedy, 2015) is used at each policy and baseline layer. TRPO-pixel-SimHash uses binary codes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 and B = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidden layer of 512 bits, which are projected to 64 bits.

RAM states in Atari 2600 games are integer-valued vectors of length 128 in the range [0, 255]. Experiments on Montezuma's Revenge with RAM observations use a policy consisting of 2 hidden layers, each of size 32. RAM states are rescaled to a range [−1, 1]. Unlike images, only the current RAM is shown to the agent. Experiment results are averaged over 10 random seeds.

The autoencoder used for the learned hash code has a 512 bit binary code layer, using sigmoid units, to which uniform noise U(−a, a) with a = 0.3 is added. The loss function Eq. (3), using λ = 10, is updated every j_update = 3 iterations.
The architecture looks as follows: an input layer of size 52 × 52, representing the image luminance, is followed by 3 consecutive 6 × 6 convolutional layers with stride 2 and 96 filters that feed into a fully connected layer of size 1024, which connects to the binary code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 × 6 transposed convolutional layers, of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the log-probability of each of the bins is increased by 0.003, before normalizing. The softmax weights are shared among each pixel. All output nonlinearities are ReLUs; Adam (Kingma & Ba, 2015) is used as an optimization scheme; batch normalization (Ioffe & Szegedy, 2015) is applied to each layer. The architecture was shown in Figure 1 of Section 2.3."}, {"section_index": "12", "section_name": "A.2 DESCRIPTION OF THE ADAPTED RLLAB TASKS", "section_text": "This section describes the continuous control environments used in the experiments. The tasks are implemented as described in Duan et al. (2016), following the sparse reward adaptation of Houthooft et al. (2016). The tasks have the following state and action dimensions: CartPoleSwingup, S ⊆ R⁴, A ⊆ R¹; MountainCar, S ⊆ R³, A ⊆ R¹; HalfCheetah, S ⊆ R²⁰, A ⊆ R⁶; SwimmerGather, S ⊆ R³³, A ⊆ R². For the sparse reward experiments, the tasks have been modified as follows. In CartPoleSwingup, the agent receives a reward of +1 when cos(θ) > 0.8, with θ the pole angle; therefore, the agent has to figure out how to swing up the pole in the absence of any initial external rewards. In MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping the valley from the right side. In HalfCheetah, the agent receives a reward of +1 when x_body > 5. As such, it has to figure out how to move forward without any initial external reward. The time horizon is set to T = 500 for all tasks."}, {"section_index": "13", "section_name": "A.3 EXAMPLES OF ATARI 2600 RAM ENTRIES", "section_text": "Table 5 lists the semantic interpretation of certain RAM entries in Montezuma's Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. "Beam walls" are deadly barriers that occur periodically in some rooms.

Table 5: Interpretation of particular RAM entries in Montezuma's Revenge

RAM index  Group       Meaning
3          room        room number
42         agent       x coordinate
43         agent       y coordinate
52         agent       orientation (left/right)
27         beam walls  on/off
83         beam walls  beam wall countdown (on: 0, off: 36 → 0)
0          counter     counts from 0 to 255 and repeats
55         counter     death scene countdown
67         objects     existence of objects (doors, skull and key) in the 1st room
47         skull       x coordinate (both 1st and 2nd rooms)

"}, {"section_index": "14", "section_name": "A.4 ANALYSIS OF LEARNED BINARY REPRESENTATION", "section_text": "Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma's Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond with each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder.
[Figure 6: frames and corresponding binary codes] Figure 6: Frostbite, Freeway, and Montezuma's Revenge: subsequent frames (left) and corresponding code (right); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number.

[Figure 7: frames, reconstructions, and reconstruction errors] Figure 7: Freeway: subsequent frames and corresponding code (top); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number. Within each image, the left picture is the input frame, the middle picture the reconstruction, and the right picture the reconstruction error.

We experimented with directly building a hashing dictionary with keys φ(s) and values the state counts, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the "bytes" type in Python. The hash table is a dictionary using those bytes as keys.

However, an alternative technique called Count-Min sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let p^1, ..., p^l be distinct large prime numbers and define φ^j(s) = φ(s) mod p^j. The count of state s is returned as min_{1≤j≤l} n^j(φ^j(s)). To increase the count of s, we increment n^j(φ^j(s)) by 1 for all j. Intuitively, the method replaces φ by weaker hash functions, while it reduces the probability of over-counting by reporting counts agreed upon by all such weaker hash functions. The final hash code is represented as (φ^1(s), ..., φ^l(s)).
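A compact Python sketch of this counting scheme (hypothetical code, using the six "6M" primes reported for the experiments below):

import numpy as np

PRIMES_6M = [999931, 999953, 999959, 999961, 999979, 999983]

class CountMinTable:
    def __init__(self, primes=PRIMES_6M):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def increment(self, code):
        # code: integer hash phi(s); bump n^j(phi^j(s)) for all j
        for table, p in zip(self.tables, self.primes):
            table[code % p] += 1

    def count(self, code):
        # report only the count agreed upon by all weaker hash functions,
        # which keeps the probability of over-counting low
        return min(int(t[code % p]) for t, p in zip(self.tables, self.primes))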
Throughout all experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as "6M". In addition, we experimented with 6 other prime numbers, each approximately 15M, which we abbreviate as "90M". As we can see in Figure 8, counting states with a dictionary or with Bloom filters leads to similar performance, but the computation time of the latter is lower. Moreover, there is little difference between direct counting and using a very large table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand.

[Figure 8: plots of (a) mean average undiscounted return and (b) average bonus reward, comparing direct counting with Bloom 6M and Bloom 90M] Figure 8: Statistics of TRPO-pixel-SimHash (k = 256) on Frostbite. Solid lines are the mean, while the shaded areas represent the one standard deviation. Results are derived from 10 random seeds. Direct counting with a dictionary uses 2.7 times more computation than counting Bloom filters (6M or 90M).

Theory of Bloom Filters Bloom filters (Bloom, 1970) are popular for determining whether a data sample s' belongs to a dataset D. Suppose we have l functions φ^j that independently assign each data sample to an integer between 1 and p uniformly at random. Initially 1, 2, ..., p are marked as 0. Then every s ∈ D is "inserted" through marking φ^j(s) as 1 for all j. A new sample s' is reported as a member of D only if φ^j(s') is marked as 1 for all j. A Bloom filter has zero false negative rate (any s ∈ D is reported a member), while the false positive rate (the probability of reporting a nonmember as a member) decays exponentially in l.

Though Bloom filters support data insertion, they do not allow data deletion. Counting Bloom filters (Fan et al., 2000) maintain a counter n(·) for each number between 1 and p. Inserting/deleting s corresponds to incrementing/decrementing n(φ^j(s)) by 1 for all j. Similarly, s is considered a member if ∀j : n(φ^j(s)) > 0.

Count-Min sketch is designed to support memory-efficient counting without introducing too many over-counts. It maintains a separate count n^j for each hash function φ^j defined as φ^j(s) = φ(s) mod p^j, where p^j is a large prime number. For simplicity, we may assume that p^j ≈ p ∀j and that φ^j assigns s to any of 1, ..., p with uniform probability.

We now derive the probability of over-counting. Let s be a fixed data sample (not necessarily inserted yet) and suppose a dataset D of N samples is inserted. We assume that p ≫ N. Let n := min_{1≤j≤l} n^j(φ^j(s)) be the count returned by the Bloom filter. We are interested in computing Prob(n > 0 | s ∉ D). Due to the assumptions about φ^j, we know n^j(φ^j(s)) ~ Binomial(N, 1/p). Therefore,

Prob(n > 0 | s ∉ D) = Prob(n > 0, s ∉ D) / Prob(s ∉ D)
                    = (Prob(n > 0) − Prob(s ∈ D)) / Prob(s ∉ D)
                    ≈ Prob(n > 0) / Prob(s ∉ D)
                    = [∏_{j=1}^{l} Prob(n^j(φ^j(s)) > 0)] / (1 − 1/p)^N
                    = (1 − (1 − 1/p)^N)^l / (1 − 1/p)^N
                    ≈ (1 − e^{−N/p})^l / e^{−N/p}
                    ≈ (1 − e^{−N/p})^l.

In particular, the probability of over-counting decays exponentially in l. We refer the readers to Cormode & Muthukrishnan (2005) for other properties of the Count-Min sketch."}, {"section_index": "15", "section_name": "A.6 ROBUSTNESS ANALYSIS", "section_text": "Apart from the experimental results shown in Table 1 and Table 3, additional experiments have been performed to study several properties of our algorithm.

Hyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations.

The results are summarized in Table 6. Herein, k refers to the length of the binary code for hashing while β is the multiplicative coefficient for the reward bonus, as defined in Section 2.2. This table demonstrates that most hyperparameter settings outperform the baseline (β = 0) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small β-values lead to insufficient exploration, while large β-values cause the bonus rewards to overwhelm the true rewards. With a fixed k, the scores are roughly concave in β, peaking at around 0.2. Higher granularity k leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search.

Table 6: TRPO-RAM-SimHash performance robustness to hyperparameter changes on Frostbite

                  β
k          0     0.01   0.05   0.1    0.2    0.4    0.8    1.6
baseline   397   -      -      -      -      -      -      -
64         -     879    2464   2243   2489   1587   1107   441
128        -     1475   4248   2801   3239   3621   1543   395
256        -     2583   4497   4437   7849   3516   2260   374

State and state-action counting Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup, summarized in Table 7. In particular, a bonus reward r^+ = β/√(n(φ(s), a)) instead of r^+ = β/√(n(φ(s))) is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with k = 256 and β = 0.2.

Table 7: Performance comparison between state counting (left of the slash) and state-action counting (right of the slash) using TRPO-RAM-SimHash on Frostbite

                  β
k          0.01        0.05        0.1         0.2         0.4         0.8         1.6
64         879/976     2464/1491   2243/3954   2489/5523   1587/5985   1107/2052   441/742
128        1475/808    4248/4302   2801/4802   3239/7291   3621/4243   1543/1941   395/362
256        2583/1584   4497/5402   4437/5431   7849/4872   3516/3175   2260/1238   374/96
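Relative to the earlier bonus sketch, the state-action variant studied in Table 7 amounts to a one-line change to the hash key (again a hypothetical fragment, assuming discrete actions):

import math
from collections import defaultdict

counts = defaultdict(int)

def state_action_bonus(phi, state, action, beta=0.2):
    key = (phi(state), action)          # count n(phi(s), a) instead of n(phi(s))
    counts[key] += 1
    return beta / math.sqrt(counts[key])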
"}]
Sk8csP5ex
[{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS:\nENSEMBLES & THE ROLE OF BATCH NORMALIZATIOD", "section_text": "Etai Littwin & Lior Wolf"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al.||2015) (ResNets) are neural networks with skip connections. Thes\nnetworks, which are a specific case of Highway Networks (Srivastava et al.| {2015), present stat\n\nof the art results in the most competitive computer vision tasks including image classification anc\nobject detection.\nOur analysis reveals the mechanism for this dynamic behavior and explains the driving force behind\nit. This mechanism remarkably takes place within the parameters of Batch Normalization\n\n2015), which is mostly considered as a normalization and a fine-grained whitening\n\nmechanism that addresses the problem of internal covariate shift and allows for faster learning rates.\nWe show that the scaling introduced by batch normalization determines the depth distribution in the\nvirtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the\neffective ensemble distribution to bigger depths.\nThe main tool we employ in our analysis is spin glass models. |Choromanska et al. (2015a) have\ncreated a link between conventional networks and such models, which leads to a comprehensive\n\nstudy of the critical points of neural networks based on the spin glass analysis of |Auffinger et al.\n(2013). In our work, we generalize these results and link ResNets to generalized spin glass models.\nThese models allow us to analyze the dynamic behavior presented above. Finally, we apply the\nresults of Auffinger & Arous|(2013) in order to study the loss surface of ResNets."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to con-\nventional networks of the same depth and are trainable at extreme depths. It has\nrecently been shown that Residual Networks behave like ensembles of relatively\nshallow networks. We show that these ensembles are dynamic: while initially\nthe virtual ensemble is mostly at depths lower than half the network\u2019s depth, as\nraining progresses, it becomes deeper and deeper. The main mechanism that con-\nols the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch\nNormalization technique. We explain this behavior and demonstrate the driving\nforce behind it. As a main tool in our analysis, we employ generalized spin glass\nmodels, which we also use in order to study the number of critical points in the\nyptimization of Residual Networks.\nThe success of residual vee (Heart was BUTE to the ability to train very deep networks when\nemploying skip connections A complementary view is presented by [Veit et al.\n, who attribute it to eae power 4 Sot oles and present an unraveled view of ResNets that\ndepicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution\naround half depth. They also present experimental evidence that short paths of lengths shorter than\nhalf-depth dominate the ResNet gradient during training.\nThe analysis presented here shows that ResNets are ensembles with a dynamic depth behavior.\nWhen starting the training process, the ensemble is dominated by shallow networks, with depths\nlower than half-depth. As training progresses, the effective depth of the ensemble increases. 
"}, {"section_index": "3", "section_name": "2 A RECAP OF CHOROMANSKA ET AL. (2015)", "section_text": "We briefly summarize Choromanska et al. (2015a), which connects the loss function of multilayer networks with the hamiltonian of the p spherical spin glass model, and state their main contributions and results. The notations of our paper are summarized in Appendix A and slightly differ from those in Choromanska et al. (2015a).

A simple feed forward fully connected network N, with p layers and a single output unit, is considered. Let n_i be the number of units in layer i, such that n_0 is the dimension of the input, and n_p = 1. It is further assumed that the ReLU activation functions, denoted by R(·), are used. The output Y of the network given an input vector x ∈ R^d can be expressed as

Y = Σ_{i=1}^{d} Σ_{j=1}^{γ} x_{ij} A_{ij} ∏_{k=1}^{p} w_{ij}^{(k)},

where the first summation is over the network inputs x_1 ... x_d, and the second is over all paths from input to output. There are γ = ∏_{i=1}^{p} n_i such paths and ∀i, x_{i1} = x_{i2} = ... = x_{iγ}. The variable A_{ij} ∈ {0, 1} denotes whether the path is active, i.e., whether all of the ReLU units along this path are producing positive activations, and the product ∏_{k=1}^{p} w_{ij}^{(k)} represents the specific weight configuration w_{ij}^{(1)} ... w_{ij}^{(p)} multiplying x_i given path j. It is assumed throughout the paper that the input variables are sampled i.i.d. from a normal Gaussian distribution.

Definition 1. The mass of the network N is defined as Ψ = ∏_{i=1}^{p} n_i.

The variables A_{ij} are modeled as independent Bernoulli random variables with a success probability ρ, i.e., each path is equally likely to be active. Therefore,

E_A[Y] = Σ_{i=1}^{d} Σ_{j=1}^{γ} x_{ij} ρ ∏_{k=1}^{p} w_{ij}^{(k)}.

The task of binary classification using the network N with parameters w is considered, using either the hinge loss L_h^N or the absolute loss L_a^N:

L_h^N(w) = E_x[max(0, 1 − Y_x Y)],     L_a^N(w) = E_x[|Y_x − Y|],

where Y_x is a random variable corresponding to the true label of sample x. In order to equate either loss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:

A1 Variable independence - The inputs x_{ij} are assumed to be independent normal Gaussian variables.

A2 Redundancy in network parameterization - It is assumed the set of all the network weights [w_1, w_2, ..., w_N] contains only Λ unique weights such that Λ < N.

A3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the graph of connections defining the network N. Practically, this means that we assume every node is adjacent to an edge with any one of the Λ unique weights.

A4 Spherical constraint - The following is assumed:

(1/Λ) Σ_{i=1}^{Λ} w_i² = C.

These assumptions are made for the sake of analysis, and do not necessarily hold. The validity of these assumptions was posed as an open problem in Choromanska et al. (2015b), where a different degree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption of A_{ij}, were deemed unrealistic, and A2-A4 as plausible. For example, A1 does not hold since each input x_i is associated with many different paths and x_{i1} = x_{i2} = ... = x_{iγ}. See Choromanska et al. (2015a) for further justification of these approximations.
Under A1 - A4, the loss takes the form of a centered Gaussian process on the sphere S^{\Lambda-1}(\sqrt{\Lambda}). Specifically, it is shown to resemble the hamiltonian of the spherical p-spin glass model given by:

H_{\Lambda,p}(\tilde{w}) = \frac{1}{\Lambda^{(p-1)/2}} \sum_{i_1,...,i_p=1}^{\Lambda} x_{i_1...i_p} \tilde{w}_{i_1} \tilde{w}_{i_2} \cdots \tilde{w}_{i_p},

where x_{i_1...i_p} are independent normal Gaussian variables.

In Auffinger et al. (2013), the asymptotic complexity of the spherical p spin glass model is analyzed based on random matrix theory. In Choromanska et al. (2015a) these results are used in order to shed light on the optimization process of neural networks. For example, the asymptotic complexity of spherical spin glasses reveals a layered structure of low-index critical points near the global optimum. These findings are then given as a possible explanation to several central phenomena found in neural networks optimization, such as similar performance of large nets, and the improbability of getting stuck in a "bad" local minima.

As part of our work, we follow a similar path. First, a link is formed between residual networks and the hamiltonian of a general multi-interaction spherical spin glass model as given by:

H_{\Lambda}(\tilde{w}) = \sum_{r=1}^{p} \epsilon_r \frac{1}{\Lambda^{(r-1)/2}} \sum_{i_1,...,i_r=1}^{\Lambda} x_{i_1...i_r} \tilde{w}_{i_1} \tilde{w}_{i_2} \cdots \tilde{w}_{i_r},

where \epsilon_1 ... \epsilon_p are positive constants. Then, using Auffinger & Arous (2013), we obtain insights on residual networks. The other part of our work studies the dynamic behavior of residual networks, where we relax the assumptions made for the spin glass model.

We begin by establishing a connection between the loss function of deep residual networks and the hamiltonian of the general spherical spin glass model. We consider a simple feed forward fully connected network N, with ReLU activation functions and residual connections. For simplicity of notations without the loss of generality, we assume n_1 = ... = n_p = n, and n_0 = d as before. In our ResNet model, there exist p - 1 identity connections skipping a single layer each, starting from the first hidden layer. The output of layer l > 1 is given by:

N_l(x) = R(W_l^T N_{l-1}(x)) + N_{l-1}(x),

where W_l denotes the weight matrix connecting layer l - 1 with layer l. Notice that the first hidden layer has no parallel skip connection, and so N_1(x) = R(W_1^T x). Without loss of generality, the scalar output of the network is the sum of the outputs of the output layer p and is expressed as

Y = \sum_{r=1}^{p} \sum_{i=1}^{d} \sum_{j=1}^{\gamma_r} x_{ij} A_{ij}^{(r)} \prod_{k=1}^{r} w_{ij}^{(r)(k)}, \quad (9)

where A_{ij}^{(r)} \in \{0,1\} denotes whether path j of length r is open, and \forall j, j', \; x_{ij} = x_{ij'}. The residual connections in N imply that the output Y is now the sum of products of different lengths, indexed by r. Since our ResNet model attaches a skip connection to every layer except the first, 1 \le r \le p. See Sec. 6 regarding models with less frequent skip connections.

Each path of length r includes r - 1 non-skip connections (those involving the first term in the recursion above and not the second, identity term) out of layers l = 2..p. Therefore, \gamma_r = \binom{p-1}{r-1} n^{r-1}. We define the following measure on the network:

Definition 2. The mass of a depth r subnetwork in N is defined as \psi_r = d \, \gamma_r.

The properties of redundancy in network parameters and their uniform distribution, as described in Sec. 2, allow us to re-index Eq. 9.

Lemma 1. Assuming assumptions A2 - A4 hold, and \psi_r / \Lambda^r \in \mathbb{Z}, then the output can be expressed after reindexing as:

Y = \sum_{r=1}^{p} \sum_{i_1,...,i_r=1}^{\Lambda} \Big( \sum_{j=1}^{\psi_r/\Lambda^r} x_j^{(i_1...i_r)} A_j^{(i_1...i_r)} \Big) \prod_{k=1}^{r} w_{i_k}, \quad (10)

where x_j^{(i_1...i_r)} and A_j^{(i_1...i_r)} denote the input copies and path activations associated with the j-th occurrence of the unique weight configuration w_{i_1} ... w_{i_r}.

In order to connect ResNets to generalized spherical spin glass models, we denote the variables:

\xi_{i_1 i_2 ... i_r} = \sum_{j=1}^{\psi_r/\Lambda^r} x_j^{(i_1...i_r)} A_j^{(i_1...i_r)},

i.e., the aggregate of the input copies and activations multiplying the unique weight configuration w_{i_1} w_{i_2} ... w_{i_r} in the reindexed sum.

Lemma 2. Assuming A2 - A3 hold, and \psi_r / \Lambda^r \in \mathbb{N}, then for all r, i_1 ... i_r,
the following holds:\nThe independence assumption A1 was not assumed yet, and[I4]holds regardless. Assuming A4 and\ndenoting the scaled weights \u00ab; = 4w,, we can link the distribution of Y to the distribution on :\nThe following lemma gives a generalized expression for the binary and hinge losses of the network.\nwhere C1, C2 are positive constants that do not affect the optimization process\nNote that since the input variables :\npendent or not), then the set of variable:\n\nXa are sampled from a centered Gaussian distribution (de-\n\n| io...4,. are dependent normal Gaussian variables.\n1 wv, Wp\n\n5 (at)? SEG ia. nind SRE.\nWe approximate the expected output E,(Y) with Y by assuming the minimal value in|13|holds\nsuch that V,.i,..i, ElE? :5.;,] = 4(4\u00a3)?. This approximation holds exactly when A = n, since\nall weight configurations of a particular length in Eq. [10] will appear the same number of times.\nWhen A # n, the uniformity assumption dictates that each configuration of weights would appear\napproximately equally regardless of the inputs, and the expectation values would be very close to\n\nthe lower hound. The followjne exnreccion for Y ic thue ohtained:\nLy (x) = C1) + C2Y\nWe denote the important quantities\nTheorem 1. Assuming op EN, we have that:\n1 __6\nlim - arg max(e,) = i+B\npoco p r\nTheorem 2. For any a, < we. <Q, and assuming ayp, a2p, op EN, it holds that:\n2p\nThm. 2] implies that for deep residual networks, the contribution of weight products of order far\naway from the maximum weap is negligible. The loss is, therefor, similar in complexity to that of\n\nan ensemble of potentially shallow conventional nets. The next Lemma shows that we can shift the\neffective depth to any value by simply controlling C.\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C' such tha\narg max,,(e,(C)) = k.\nThe expression for the output of a residual net in Eq.}15|provides valuable insights into the machinery\nat work when optimizing such models. Thm. [IJand mply that the loss surface resembles that of an\nensemble of shallow nets (although not a real ensemble due to obvious dependencies), with various\ndepths concentrated in a narrow band. As noticed in|Veit et al.| ), viewing ResNets as ensembles\nof relatively shallow networks helps in explaining some of the apparent advantages of these models,\nparticularly the apparent ease of optimization of extremely deep models, since deep paths barely\naffect the overall loss of the network. However, this alone does not explain the increase in accuracy\nof deep residual nets over actual ensembles of standard networks. In order to explain the improved\nperformance of ResNets, we make the following claims:\nhas the form of a spin glass model, except for the dependency between the\nvariables Z;, i....;,. We later use an assumption similar to A1 of independence between these vari-\nables in order to link the two binary classification losses and the general spherical spin glass model.\nHowever, for the results in this section, this is not necessary.\nThe series (\u20ac,)?_, determines the weight of interactions of a specific length in the loss surface. No-\ntice that for constant depth p and large enough {, arg max,.(\u00a2,) = p. Therefore, for wide networks,\nwhere n and, therefore, ( are large, interactions of order p sane the loss surface, and the effect\nof the residual connections diminishes. Conversely, for constant ( and a large enough p (deep net-\nworks), we have that arg max,.(\u00a2,.) 
< p, and can expect interactions of order r < p to dominate the\nloss. The asymptotic behavior of \u20ac is captured by the following lemma:\n\\s the next theorem shows, the epsilons are concentrated in a narrow band near the maximal value\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an\n\nensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig. [IJa-c\nfor various values of 3. Ina common weight initialization scheme for neural networks, C = Sa (Orr\n\n& Miiller| 2003}|Glorot & Bengio}|2010). With this initialization and A = n, 3 = p and the maximal\nweight is obtained at less than half the network\u2019s depth limp_,.. arg max,.(\u20ac,) < b. Therefore, at\nthe initialization, the loss function is primarily influenced by interactions of considerably lower order\n\nthan the depth p. which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th\nscaling parameter C.\nfor the remainder of Sec.4, we relax all assumptions, and assume that at some point in_time\n\nx by _, w? = C?, and A = N. Using Eq.|9|for the output of the network Y in Lemma.|3| the\nloss can be expressed:\nd\nLy (x,w) =C, > S> SS APT [ue (*)\n\nr=1 i=l j=1 k=1\nIr\n\nOLn(z,w) _ C2 . (r) 4(r) (r)(h)\nae -ohrh Xi; Ay Ile!\n\nr=1 j=l k=1\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by\ndeeper networks."}, {"section_index": "4", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residual\nnetworks. As we will show, batch normalization layers offer an easy starting condition for the\nnetwork, such that the gradients from early in the training process will originate from extremely\nshallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out-\nput of each ReLU unit in layer / normalized by a factor o; and then is multiplied by some parameter\nA,. The output of layer / > 1 is therefore:\nNila) = SRW Ni-a(e)) +N (a)\nwhere a; is (e} mean of the estimated standard deviations of various elements in the vecto1\n\nRW M- 1(x)). Furthermore, a typical initialization of batch normalization parameters is to set\nVi, A= 1. In this case, providing that units in the same layer have equal variance o;, the recursive\nrelation EN} 41(x)7] = 1 + ELM (x)]] holds for any unit J in layer J. This, in turn, implies that the\noutput of the ReLU units should have increasing variance o as a function of depth. Multiplying the\nweight parameters in deep layers with an increasingly small scaling factor oe effectively reduces\nthe influence of deeper paths, so that extremely short paths will dominate the early stages of opti-\nmization. We next analyze how the weight scaling, as introduced by batch normalization, provides\na driving force for the effective ensemble to become deeper as training progresses.\nWe consider a simple network of depth p, with a single residual connection skipping p \u2014 m layers.\nWe further assume that batch normalization is applied at the output of each ReLU unit as described\nin Eq. [22] [22] We denote by 1)...1;, the | indices of layers that are not skipped by the residual connection,\n\nN\nand \\m Te,\n\ni=1 Gi,\n\n\u00bb Ap Pe iA . Since every path of length m is multiplied by Mins and every\n2. 
During training, C' changes and causes a shift of focus from a shallow ensemble to deeper\nand deeper ensembles, which leads to an additional capacity.\n\n3. In networks that employ batch normalization, C is directly embodied as the scale parameter\nX. The starting condition of A = 1 offers a good starting condition that involves extremely\nshallow nets.\nwhere C, C'2 are some constants that do not affect the optimization process. In order to gain addi-\ntional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to\nthe scale parameter C\u2019. Using Eq.|9]for the output, we obtain:\n(c)\n\n(a)\n\n(d)\n\n(f)\n\n(e)\nFigure 1: (a) A histogram of e,(8), r = 1..p, for 8 = 0.1 and p = 100. (b) Same for 6 = 0.5\n(c) Same for 8 = 2. (d) Values (y-axis) of the batch normalization parameters ; (x-axis) for\n10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix [C] for\nmore details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a\nresidual network, which does not employ batch normalization, as a function of the iteration. (f) The\nasymptotic of the mean number of critical points of a finite index as a function of 6.\nd Ym\n\n(,w =n OT A T) wl 45 oyna! 2) ae) I~ ()\n\ni=l j=l k=1 i=1 j=l k=1\n= L(x, w) + Lp(x,u\nWe denote by V the derivative operator with respect to the parameters w, and the gradient g =\nVwLy(x,w) = gm + gp evaluated at point w.\nOL (x, w \u2014 Hg)\nX= Ko\n\nOn\n\n> |r|\nOLN (x, w \u2014 Hg)\n\nAL Eb OM\n\n> |Ar|\nrhm. [3] suggests that |;| will increase for layers / that do not have skip-connections. Conversely,\nf layer J has a parallel skip connection, then |.;| will increase if ||g,||2 > ||gm||2, where the later\n-ondition implies that shallow paths are nearing a local minima. Notice that an increase in |Aj\u00a21, ...1,,,|\n\n\u2018esults in an increase in pl, while \\m| remains unchanged, therefore shifting the balance into\nJeeper ensembles.\nThis steady increase of |A;|, as predicted in our theoretical analysis, is also backed in experimen-\ntal results, as depicted in Fig. {I{d). Note that the first layer, which cannot be skipped, behaves\ndifferently than the other layers. More experiments can be found in Appendix|C]\nd Ym r\n\nCy = OPS SI A\u201d TP we OSS AMA TL ao\nk=1\n\ni=1 j=l k=1 i=l j=l\n= Lp(a, w) + Ly(2, w)\nOLN (x, w \u2014 Lg ,\n19) 1.2 (mllgn 3 + vllgp 3 + (m +P); 9)\nThm. /4 indicates that if either ||g,||2 or ||gm||2 is dominant (for example, near local minimas of\nthe shallow network, or at the start of training), the scaling of the weights C\u2019 will increase. This\nexpansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in-\ncrease the overall capacity of the residual network. This dynamic behavior of the effective depth of\nresidual networks is of key importance in understanding the effectiveness of these models. While\noptimization starts off rather easily with gradients largely originating from shallow paths, the overall\nadvantage of depth is still maintained by the dvnamic increase of the effective depth.\nWe now present the results of|Auffinger & Arous|(2013) regarding the asymptotic complexity in the\n\ncase of lim,_,., of the multi-spherical spin glass model given by:\nA\n\nr ~ ~\nTi, i, Wig Wi,\n\n1\nau\u201d ul\n=v +u \u2014v\n\n2\n\nQa\nNote that for the single interaction spherical spin model a? = 0. 
The index of a critical point of\nHZ,,x is defined as the number of negative eigenvalues in the hessian V?He.a evaluated at the critical\npoint w.\nDefinition 4. For any 0 < k < Aandu \u00a9 R, we denote the random number Crt ,;,(u, \u20ac) as the\n\nnumber of critical points of the hamiltonian in the set BX = {AX|X \u20ac (\u2014o00, u)} with index k.\nTho\u00a2 toe\nCrtan(ue)= So {Hea \u20ac Au} 1 {i(V? Hea) =f}\n\nw:V He,a=0\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be\nobserved without the use of batch normalization, as a steady increase in the L2 norm of the weights,\nas shown in Fig. {Tfe). In order to model this, consider the residual network as discussed above,\nwithout batch normalization layers. Recalling, ||w||2 = CVA,w = %, the loss of this network is\nexpressed as:\nwhere Jj; are independent centered standard Gaussian variables, and \u20ac = (\u20ac,),>2 are positive\nreal numbers such that re \u20ac,2\" < oo. A configuration w of the spin spherical spin-glass model\nis a vector in R\u201c satisfying the spherical constraint:\nFurthermore, define 6;(u, \u20ac) = lima_.oo x log E[Crta,,(ue)]. Corollary 1.1 of|Auffinger & Arou:\n(2013) states that for any k > 0:\nEq. [33]provides the asymptotic mean total number of critical points with non-diverging index k. It is\npresumed that the SGD algorithm will easily avoid critical points with a high index that have many\ndescent directions, and maneuver towards low index critical points. We, therefore, investigate how\nthe mean total number of low index critical points vary as the ensemble distribution embodied in\n(\u20ac,),s2 Changes its shape by a steady increase in (3.\nTheorem 5. For any k \u20ac N,p > 1, we denote the solution to the following constrained optimization\nnrohblemes:\nP\ne* = argmax0;(R,e) s.t e=l\n\u20ac rae\nThm. |5]implies that any heterogeneous mixture of spin glasses contains fewer critical points of a\nfinite index, than a mixture in which only p interactions are considered. Therefore, for any distribu-\ntion of \u20ac that is attainable during the training of a ResNet of depth p, the number of critical points is\nlower than the number of critical points for a conventional network of depth p."}, {"section_index": "5", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis-\nplay during training and to study their loss surface. In particular, we use at one point or another the\nassumptions of redundancy in network parameters, near uniform distribution of network weights, in-\ndependence between the inputs and the paths and independence between the different copies of the\n\ninput as described in{Choromanska et al.](2015a). The last two assumptions, i.e., the two indepen-\ndence assumptions, are deemed in|Choromanska et al.\n\nas unrealistic, while the remaining\nare considered plausible.\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However\nThm. | and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence\nbetween the different copies of the input. Moreover, the analysis of the dynamic behavior of residua\nnets (Sec. 4) does not assume any of the above assumptions.\nOur results are well aligned with some of the results shown in [Larsson et al.] , where it is\nnoted empirically that the deepest column trains last. 
This is reminiscent of our claim that the deeper networks of the ensemble become more prominent as training progresses. The authors of Larsson et al. (2016) hypothesize that this is a result of the shallower columns being stabilized at a certain point of the training process. In our work, we discover the exact driving force that comes into play.

In addition, our work offers an insight into the mechanics of the recently proposed densely connected networks (Huang et al., 2016). Following the analysis we provide in Sec. 3, the additional shortcut paths decrease the initial capacity of the network by offering many more short paths from input to output, thereby contributing to the ease of optimization when training starts. The driving force mechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase.

Note that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip connections, including dense nets. This is done directly by including all of the induced sub networks in Eq. 9. The reformulation of Eq. 10 would still hold, given that \psi_r is modified accordingly.

Fig. 1(f) shows that as the ensemble progresses towards deeper networks, the mean amount of low index critical points increases, which might cause the SGD optimizer to get stuck in local minima. This is, however, resolved by the fact that by the time the ensemble becomes deep enough, the loss function has already reached a point of low energy as shallower ensembles were more dominant earlier in the training. In the following theorem, we assume a finite ensemble such that

\epsilon_r = \begin{cases} 1, & r = p \\ 0, & \text{otherwise.} \end{cases}"}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have surrounded ResNets since their introduction. Here, we show that ResNets display a dynamic ensemble behavior, which explains the ease of training such networks even at very large depths, while still maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective capacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic mechanism typically takes place within the outer multiplicative factor of the batch normalization module."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Antonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high dimensional sphere. Annals of Probability, 41(6):4214-4247, 11 2013.

Anna Choromanska, Yann LeCun, and Gérard Ben Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.

Gustav Larsson, Michael Maire, and Gregory Shakhnarovich.
Fractalnet: Ultra-deep neural net-\nworks without residuals. arXiv preprint arXiv: 1605.07648, 2016.\nGenevieve B Orr and Klaus-Robert Miiller. Neural networks: tricks of the trade. Springer, 2003."}, {"section_index": "8", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table[T]presents the various symbols used throughout this work and their meaning\nAnna Choromanska, Mikael Henaff, Micha\u00e9l Mathieu, G\u00e9rard Ben Arous, and Yann LeCun. The\nloss surfaces of multilayer networks. In AJSTATS, 2015a.\nRupesh Kumar Srivastava, Klaus Greff, and Jiirgen Schmidhuber. Highway networks. arXiv preprint\narXiv: 1505.00387, 2015.\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of\nrelatively shallow networks. In NIPS, 2016."}, {"section_index": "9", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x\n\nThe output of layer 7 of network NV given input x\n\nThe final output of the network VV\n\nTrue label of input x\n\nLoss function of network VV\n\nHinge loss\n\nAbsolute loss\n\nThe depth of network VV\n\nWeights of the network w \u20ac R\u201c\n\nA positive scale factor such that ||w||2 = VAC\n\nScaled weights such that w = ow\n\nThe number of units in layers 1 > 0\n\nThe number of unique weights in the network\n\nThe total number of weights in the network VV\n\nThe weight matrix connecting layer / \u2014 1 to layer J in NV.\n\nThe hamiltonian of the p interaction spherical spin glass model.\n\nThe hamiltonian of the general spherical spin glass model.\n\nTotal number of paths from input to output in network VV\n\nyd\n\nTotal number of paths from input to output in network VV of length r\nyd\n\nReLU activation function\n\nBernoulli random variable associated with the ReLU activation functio\nParameter of the Bernoulli distribution associated with the ReLU unit\n\nmultiplier associated with paths of length r in NV.\npnC\n\nNormalization factor.\n\nBatch normalization multiplicative factor in layer |.\n\nThe mean of the estimated standard deviation various elements in (VV\nProof of Lemma{2| From|12| we have that \u20ac;, i,...i,. is defined as a sum of 2 Ar inputs. Since there are\nonly p distinct inputs, it holds that for each &;, i....;,. there exists a sequence @ = (ova \u20ac N such\nthat 4, a = Be and Ei: i9.cin = = 1, a;2;. We, therefore, have that E[\u00e9? 2 ini.) = llaull.\nNote that the minimum value of E[\u00e9? , is a solution to the following:\n\nby\nYr 7 yp\nmin(E[E, i,...i,]) = Mina (|lall2) 8. lalla = Ar? (aidiar EN,\nProof of Lemma{]| There are a total of w, paths of length r from input to output, and a total of\nA\u201d unique r length configurations of weights. The uniformity assumption then implies that each\n\nconfiguration of weights is repeated \u201c= times. By summing over the unique configurations, and re\nindexing the input we arrive at Eq\njim Zloa(( 2.) 3%\") = H(0) + atag(9)\nProof of Tm For brevity, we provide a sketch of the proof. It is enough to show that\naip\n\nlimy 50 yoo , \u20ac. = 0 for 8 < 1. Ignoring the constants in the binomial terms, we have:\nap ym P (py? pr\n\n2 2201p\nP \"6\n1P(atp) 4\n< im \u2014\u2014.___\n6; = lim\n\n5 2\n\u201c poo\npoo z\nhere 27 = S~?_, (?)\u201d6?\", which can be expressed using the Legendre polynomial of order p:\nProof of Lemma{A| For simplicity, we ignore the constants in the binomial coefficient, and assume\n\n\u20ac, = 4() 8\". Notice that for B= (b b)> we have that arg max,.(\u00a2,.(8*)) = p, arg max,.(e,\n1 and arg max,.(e,(1)) =\n\n. 
From the monotonicity and continuity of 6\", any value 1 > k > pcan\nbe attained. The linear dependency B(C) = ene completes the proof.\nOLn(2,w\u2014pg) _ WLy(x,w) OLN (x, w)\nON ~\u2014~on WNw\u2014 ay, 9\nOLy (x, w \u2014 wg. 1 , )\nDA w) \u00a9 0= 2>-(Gm + Gp)\" (Gm + Gp) = \u2014HZIl9m + Gpll3 <6\nOr vf 1\nIn(1 +2 3\nxz ligm + gpl) = Lal + 45\n|Ai|(1 + yp) 2 bl\nOLy (x, w = Hg)\nOr\n\n1 1 \u2018\n~0-4 )\" Gp \u201chy (9n9p + \\IGpll:\n\nui (9m + Gp\n2\n\n=(1-\n\nBPP \u2014 ae\n\n1+\n\n3?)\nC\u00bb(a, w)). Using taylor series expansion:\n\nOLy(z,w\u2014pg) _ OLn(x,w) OLN (x, w)\n: x V - 40\non an NN an, \u201c)\nSubstituting Vy XG) = x (Gm +p) in|40]we have:\nOLy (x, w \u2014 pg 1 1\n0 \u2014 pm + 9p)\" Om + Gp) =\u2014H [lm + 9pl3 <0 (AL)\nOr MI ay)\nAnd hence:\nALN (x, w \u2014 HG) 1\nMa Dy TE HPS lhgm + plo\n1\n=N(1+ 2? sy |l9m + Gplz) 42)\n7\nFinally:\n1 1\n(1 + 1\u00b0 S5llgm + Gplla)| = Ad + #55) = Pal (43)\nHT HT\n2. Since paths of length m skip layer /, we have that Vz Pex) = XIp- Therefore:\nOLy (x, w \u2014 ng) 1 1\nDy \u00a90\u2014 BY (Gm + Ge)\" 9p = HZ (GmGe + ligell) (44)\n\nThe condition ||gp||2 > ||gm||2 implies that g,\",gp + ||gp||3 > 0, completing the proof.\nOLy (a,w) VA 1 tT\nVw\u2014 aq 9 = (mL (a, w) + pLy(a, w))Vw oe? + Gilman +P9p)' g\nwig 1 1\n= (ML n(x, w) + pLp(x, w)) rors t GlMGm t Pv) 'g Gilman t PQ\u00bb) ' 9;\nJL N (az, W \u2014 HGw) 1\nYel =0- HG (MI t PG\u00bb) (Gm + Jp)\n\n1\nHe (mllgp|l3 + Pllgp|la + (m+ P)gp Gm\nelV\"e) e'(V\" \u2014V'e\ne!V'e el (V\" + Vie\n\n1\nOx(R, \u20ac) = Slog(\n1 el Ve el (V\" Ve\nmare, (R,\u20ac) < mare(slog( rr, )) - mine OT Ve)\n\n2\n\nploa(p \u2014 1) ~ (= =) = 6x(Re*)"}, {"section_index": "10", "section_name": "\u2019 ADDITIONAL EXPERIMENTS", "section_text": "Fig. 1(d) and 1(e) report the experimental results of a straightforward setting, in which the task i\nto classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. Th\nloss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containin;\n20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed o1\n10,000 samples, using SGD with minibatches of 50 samples.\nNext, we provide additional experiments performed on the public CIFAR-10 and CIFAR-100 dat\n\nsets (Krizhevsky||2009). The public ResNet code of|https://github.com/facebook/fb.\n\nresnet.torchiis used for networks of depth 32.\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica:\ntive coefficient or in the weight matrices themselves. In the following experiments, it seems tha\nis orthogonal to the weights. We have that den (at) =a L(mLm(a,w) + pLlp(x, w)). Using taylor\nseries expansion we have:\n\nOLn (a, w \u2014 wg) w CEn (a, w)\noC oC\n\nOLy (x, w)\noc\n\nUV w (45)\n\nFor the last term we have:\n\nAL (0.2) 9 _ VK\n\n1\nVw = (MLin(x, w) + pLp(x, w)) Vw 9 + (mgm + pgp) 'g\n|wll2\" Cc\n\noc\nwl 1 1\n= (ML m(0, 8) + PL yl, w)) P+ BlMGm + PG)\" I= GlMIn + PG)\" 9 (46)\nwhere the last step stems from the fact that w'g = 0. Substituting V., dex tow) = Fal (mgm +PGp)\n\nin[45]we have:\n\nALy(e,w=HGw) 9 pa. (a, 4\naC \u00a9 0\u2014 EG (mgm + PIP)! (Gm + Gp)\n\n1\nHe (mllgpl3 + Pllgplla + (m+ P)gp Im) (AT)\n\nProof of Thm{5| Inserting Eq. [31]into Eq.[33|we have that:\n\n1 P_,e\u20acr(r\u20141) Py er(r \u2014 2)\nO4(R, \u20ac) = slog \"Sp\" ) \u2014 \"Spa (48)\nr=2 Gl r=2 &\nWe denote the matrices V\u2019 and V\" such that Vj; = r6;; and V{i = r(r \u2014 1)d;;. 
We then have:\n1 elV\"e. el (V\"-V'e\nO.(R sl \u2014 4\nw(R,\u20ac) 2 og TV? el(V\" + Ve )\n\nelV\"e) . (ev \u2014V'e\ne'Vle mine el(V\" FV)\n\n1\n= Flog (macs(Vivii*)) \u2014 mins ( (48 = Van(vel + Vay\")\n\n1\nmaze; (R, \u20ac) < maxe(5log(\n\nslog(p 1) ~~ =) =A(Re*) 50)\n\nnm A writ natant orwrenrnmrrama\nOLN (x, w)\n\nOLy(z,w\u2014 pg) _ OLn(x,w)\nYa S9G Hw\n\nOC\nP\nr=2\n\ne2r(r \u2014 2)\n\nP\n\ne2r2\n\nr=\u20142 &p\nFig. 2 depicts the results. There are two types of plots: Fig. Pfa.c) presents for CIFAR-10 anc\nCIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim:\nilar in type to Fig. 1(d) in the paper). Fig. 2[b.d) depict for the two datasets the mean of these norm:\nover all convolutional layers as a function of epoch (similar to Fig. 1(e)).\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNet\nimplementation when applied to these conventional datasets: the dominance of paths with fewer\nskip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the\nphenomenon we describe speeds up.\nIn Fig. [3] we present the multiplicative coefficient of the Batch Normalization when not absorbed.\nAs future work, we would like to better understand why these coefficients start to decrease once the\nlearning rate is reduced. As shown above, taking the magnitude of the convolutions into account.\nthe dynamic phenomenon we study becomes even more prominent at this point. The change of\nlocation from the multiplicative coefficient of the Batch Normalization layers to the convolutions\nthemselves might indicate that Batch Normalization is no longer required at this point. Indeed.\nBatch Normalization enables larger training rates and this shift happens exactly when the training\nrate is reduced. A complete analysis is left for future work.\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza-\n\u2018ion multiplicative coefficients and then it moves to the convolution layers themselves. We there-\nfore absorb the BN coefficients into the convolutional layer using the public code of\n//github.com/e-lab/torch-toolbox/tree/master/BN- absorber} Note that the\nmultiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout\nour paper, since we follow the notation of |Choromanska et al. (2015p, 7 refers to the number of\npaths. The multiplicative factor of Batch normalization appears as \\ in Sec. 4.\n\u2018mean weight nom igamma is absorbed)\n\n(b)\n\nMean norm of convolution layers a8 a function of epoch for cifarso0\n\n\u2018mean weight nom igamma is absorbed)\nA A i a A\n\u201cigure 2: (a,c) The Norm of the convolutional layers once the factors of the subsequent Batch\nNormalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph\ns a different epoch, see legend. Waving is due to the interleaving architecture of the convolutional\nayers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional\nayers\u2019 weights per epoch.\nFigure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The Norn\nof the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a differen\nepoch (see legend). Since there is no monotonic increase between the epochs in this graph, it is\nharder to interpret. 
(b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the multiplicative factors per epoch.

[Figure 3 panels: (a,c) Batch Normalization gamma per layer for multiple epochs, for CIFAR-10 and CIFAR-100; (b,d) mean norm of the Batch Normalization gamma vectors as a function of epoch, for CIFAR-10 and CIFAR-100.]"}]
BJxhLAuxg
[{"section_index": "0", "section_name": "A DEEP LEARNING APPROACH FOR JOINT VIDEC\nFRAME AND REWARD PREDICTION IN ATARI GAMES", "section_text": "Felix Leibfried \u00b0\nfelix.leibfried@gmail.com\nReinforcement learning is concerned with learning to interact with environments\nthat are initially unknown. State-of-the-art reinforcement learning approaches,\nsuch as DQN, are model-free and learn to act effectively across a wide range of\nenvironments such as Atari games, but require huge amounts of data. Model-\nbased techniques are more data-efficient, but need to acquire explicit knowledge\nabout the environment dynamics or the reward structure."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "When humans or animals receive reward for taking a particular action in a given situation, the prob-\nability is increased that they will act similarly in similar situations in the future. This is described\nby principles such as the law of effect (Thorndike} |T898), operant conditioning and\ntrial-and-error learning in behaviorist psychology, and has inspired a discipline of\n\nartificial intelligence cal! ment learning (RL, Sutton & Barto} (1998). RL is concerned\n\nwith finding optimal behavior policies in order to maximize agents\u2019 cumulative future reward.\nApproaches to RL can be divided into model-free and model-based approaches. In model-free ap\nproaches, agents learn by trial and error but do not aim to explicitly capture the dynamics of the envi\nronment or the structure of the reward function underlying the environment. State-of-the-art model\nfree approaches, such as DQN (Mnih et al.||2015), effectively approximate so-called Q-values, i.e\nthe value of taking specific actions in a given state, using deep neural networks. The impressiv\neffectiveness of these approaches comes from their ability to learn complex policies directly fror\nhigh-dimensional input (e.g., video frames). Despite their effectiveness, model-free approaches re\nquire large amounts of training data that have to be collected through direct interactions with th\nenvironment, which makes them expensive to apply in settings where interactions are costly (suc\nas most real-world applications). Additionally, model-free RL requires access to reward observa\ntions during training, which is problematic in environments with sparse reward structure\u2014unles\ncoupled with an explicit exploration mechanism.\nRL approaches that explicitly learn statistics about the environment or the reward are generally\nreferred to as model-based\u2014in a more narrow definition these statistics comprise environment dy-\nnamics and the reward function. In recent work, model-based techniques were successfully usec\nto learn statistics about cumulative future reward (Veness et al.| and to improve exploratior\nby favoring actions that are likely to lead to novel states (Bellemare et al.]|2016}/Oh et al.] {2015}\n\u201cResearch conducted while interning at Microsoft.\nNate Kushman & Katja Hofmann\nSteines \u2014 bet eel\n\nnkushman@microsoft.com\nkat ja.hofmann@microsoft.col"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we take a step towards using model-based techniques in environments\nwith high-dimensional visual state space when system dynamics and the reward\nstructure are both unknown and need to be learned, by demonstrating that it is\npossible to learn both jointly. 
Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.

Such model-based techniques result in substantially more data efficient learning compared to model-free approaches. When an accurate model of the true environment dynamics and the true reward function is available, model-based approaches, such as planning via Monte-Carlo tree search (Guo et al., 2014), outperform model-free state-of-the-art approaches.

A key open question is whether effective model-based RL is possible in complex settings where the environment dynamics and the reward function are initially unknown, and the agent has to acquire such knowledge through experience. In this paper, we take a step towards addressing this question by extending recent work on video frame prediction (Oh et al., 2015), which has been demonstrated to effectively learn system dynamics, to enable joint prediction of future states and rewards using a single latent representation. We propose a network architecture and training procedure for joint state and reward prediction, and evaluate our approach in the Arcade Learning Environment (ALE, Bellemare et al., 2013).

Our empirical results on five Atari games demonstrate that our approach can successfully predict cumulative reward up to roughly 200 frames. We complement our quantitative results with a detailed error analysis by visualizing example predictions. Our results are the first to demonstrate the feasibility of using a learned dynamics and reward model for accurate planning. We see this as a significant step towards data efficient RL in high-dimensional environments without prior knowledge."}, {"section_index": "3", "section_name": "2 RELATED WORK AND MOTIVATION", "section_text": "Two lines of research are related to the work presented in this paper: model-based RL and optimal control theory. Model-based RL utilizes a given or learned model of some aspect of a task to, e.g., reduce data or exploration requirements (Bellemare et al., 2016; Oh et al., 2015). Optimal control theory describes mathematical principles for deriving control policies in continuous action spaces that maximize cumulative future reward in scenarios with known system dynamics and known reward structure (Bertsekas, 2005; 2007).

There has been recent interest in combining principles from optimal control theory and model-based learning in settings where no information on system dynamics is available a priori and instead has to be acquired from visual data (Finn et al., 2016; Wahlström et al., 2015; Watter et al., 2015). The general idea behind these approaches is to learn a compressed latent representation of the visual state space from raw images through autoencoder networks (Bengio, 2009) and to utilize the acquired latent representation to infer system dynamics. System dynamics are then used to specify a planning problem which can be solved by optimization techniques to derive optimal policies. Watter et al. (2015) introduce an approach for learning system dynamics from raw visual data by jointly training a variational autoencoder (Kingma & Welling, 2014) and a state prediction model that operates in the autoencoder's compressed latent state representation. A similar approach for jointly learning a compressed state representation and a predictive model is pursued by Wahlström et al. (2015). Finn et al. (2016) devise a sequential approach that first learns a latent state representation from visual data and that subsequently exploits this latent representation to augment a robot's initial state space describing joint angles and end-effector positions. The augmented state space is then used to improve estimates of local system dynamics for planning.

The approaches presented above assume knowledge of the functional form of the true reward signal and are hence not directly applicable in settings like ALE (and many real-world settings) where the reward function is initially unknown. Planning in such settings therefore necessitates learning both system dynamics and reward function in order to infer optimal behavioral policies.
Recent work by Oh et al. (2015) introduced an approach for learning environment dynamics from pixel images and demonstrated that this enabled successful video frame prediction over up to 400 frames. In our current paper, we extend this recent work to enable reward prediction as well by modifying the network's architecture and training objective accordingly. The modification of the training objective bears a positive side effect: since our network must optimize a compound loss consisting of the video frame reconstruction loss and the reward loss, reward-relevant aspects in the video frames to which the reconstruction loss alone might be insensitive are explicitly captured by the optimization objective. In the subsequent section, we elucidate the approach from Oh et al. (2015) as well as our extensions for reward prediction in more detail.

[Figure 1: architecture diagram. Four 84x84 input frames pass through convolutional and fully connected layers to a 2048-dimensional encoding; the transformation stage combines the encoding multiplicatively with the action; fully connected and deconvolutional layers decode into the 84x84 predicted next frame and, through a softmax layer, the predicted reward.]

Figure 1: Network architecture for joint video frame and reward prediction. The architecture comprises three stages: an encoding stage mapping current input frames to some compressed latent representation, a transformation stage integrating the current action into the latent representation through element-wise vector multiplication denoted by '×', and a final predictive stage for reconstructing the frame of the next time step and the current reward. The network uses three different types of neuron layers ('Conv' for convolutional, 'Deconv' for deconvolutional and 'Fc' for forward connection) in combination with three different types of activation functions ('ReLU', 'Softmax' and 'Lin' for linear activations). The dimensional extent of individual layers is either depicted beneath or within layers.
The network part coloured in red highlights the extension for reward prediction."}, {"section_index": "4", "section_name": "3.1 VIDEO FRAME PREDICTION", "section_text": "The video-frame-predictive architecture from Oh et al. (2015) comprises three information-processing stages: an encoding stage that maps input frames to some compressed latent representation, a transformation stage that integrates the current action into the compressed latent representation, and a decoding stage that maps the compressed latent representation to the predicted next frame—see Figure 1. The initial encoding stage is a sequence of convolutional and forward operations that map the current frame history s_{t-h+1:t}—a three-dimensional tensor—to a compressed feature vector h_t^enc. The transformation stage converts this compressed feature vector h_t^enc into an action-conditional representation h_t^dec in vectorized form by integrating the current action a_t. The current action a_t is represented as a one-hot vector with length varying from game to game since there are at least 3 and at most 18 actions in ALE. The integration of the current action into the compressed feature vector includes an element-wise vector multiplication—depicted as '×' in Figure 1—with the particularity that the two neuron layers involved in this element-wise multiplication are the only layers in the entire network without bias parameters, see Section 3.2 in Oh et al. (2015). Finally, the decoding stage performs a series of forward and deconvolutional operations (Dosovitskiy et al., 2015; Zeiler et al., 2010) by mapping the action-conditional representation h_t^dec of the current frame history s_{t-h+1:t} and the current action a_t to the predicted video frame s_{t+1} of the next time step t + 1. Note that this necessitates a reshape operation at the beginning of the decoding cascade in order to transform the vectorized hidden representation into a three-dimensional tensor. The whole network uses linear and rectified linear units (Glorot et al., 2011) only. In all our experiments, following DQN (Mnih et al., 2015), the video frames processed by the network are 84 x 84 grey-scale images down-sampled from the full-resolution 210 x 160 Atari RGB images from ALE. Following Oh et al. (2015), the history frame time horizon h is set to 4.

The deep network proposed by Oh et al. (2015) for video frame prediction in Atari games aims at learning a function that predicts the video frame s_{t+1} at the next time step t + 1, given the current history of frames s_{t-h+1:t} with time horizon h and the current action a_t taken by the agent—see Section 3.1. Here, we extend this work to enable joint video frame and reward prediction such that the network anticipates the current reward r_t as well—see Sections 3.2 and 3.3.
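To illustrate the multiplicative action integration just described, here is a minimal PyTorch sketch (our own illustration, not the authors' implementation; the 2048-dimensional sizes follow Figure 1, everything else is an assumption):

import torch
import torch.nn as nn

class ActionTransform(nn.Module):
    """Multiplicative action integration: h_dec = W_out((W_e h_enc) * (W_a a))."""
    def __init__(self, enc_dim=2048, factor_dim=2048, num_actions=18):
        super().__init__()
        # The two factored layers carry no bias terms (cf. the text above).
        self.enc_factor = nn.Linear(enc_dim, factor_dim, bias=False)
        self.act_factor = nn.Linear(num_actions, factor_dim, bias=False)
        self.out = nn.Linear(factor_dim, enc_dim)

    def forward(self, h_enc, action_onehot):
        # Element-wise product fuses state encoding and action ('x' in Figure 1).
        return self.out(self.enc_factor(h_enc) * self.act_factor(action_onehot))

h_enc = torch.randn(32, 2048)                                 # batch of encodings
a = nn.functional.one_hot(torch.randint(0, 18, (32,)), 18).float()
h_dec = ActionTransform()(h_enc, a)                           # (32, 2048)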
"}, {"section_index": "5", "section_name": "3.2 REWARD PREDICTION", "section_text": "In this section we detail our proposed network architecture for joint state and reward prediction. Our model assumes ternary rewards which result from reward clipping in line with Mnih et al. (2015). Original game scores in ALE are integers that can vary significantly between different Atari games, and the corresponding original rewards are clipped to assume one of three values: -1 for negative rewards, 0 for no reward and 1 for positive rewards. Because of reward clipping, rewards can be represented as vectors r_t in one-hot encoding of size 3.

Following previous work (Oh et al., 2015; Mnih et al., 2015), actions are chosen by the agent on every fourth frame and are repeated on frames that were skipped. Skipped frames and repeated actions are hence not part of the data sets used to train and test the predictive network on, and original reward values are accumulated over four frames before clipping.

In Figure 1, our extension of the video-frame-predictive architecture from Oh et al. (2015) for reward prediction is highlighted in red. We add an additional softmax layer to predict the current reward r_t with information contained in the action-conditional encoding h_t^dec. The motivation behind this extension is twofold. First, our extension makes it possible to jointly train the network with a compound objective that emphasizes both video frame reconstruction and reward prediction, and thus encourages the network to not abstract away reward-relevant features to which the reconstruction loss alone might be insensitive. Second, this formulation facilitates the future use of the model for reward prediction through virtual roll-outs in the compressed latent space, without the computationally expensive necessity of reconstructing video frames explicitly—note that this requires another "shortcut" predictive model to map from h_t^dec to h_{t+1}^enc.

The original training objective in Oh et al. (2015) consists of a video frame reconstruction loss in terms of a squared loss function aimed at minimizing the quadratic l2-norm of the difference vector between the ground truth image and its action-conditional reconstruction. We extend this training objective to enable joint reward prediction. This results in a compound training loss consisting of the original video frame reconstruction loss and a reward prediction loss given by the cross entropy (Simard et al., 2003) between the ground truth reward and the corresponding prediction:

L_K(\theta) = \frac{1}{T \cdot K} \sum_{t=1}^{T} \sum_{k=1}^{K} \Big[ \underbrace{\tfrac{1}{2} \big\| \hat{s}^{(i)}_{t+k} - s^{(i)}_{t+k} \big\|_2^2}_{\text{video frame reconstruction loss}} \; \underbrace{- \; \lambda \sum_{l=1}^{3} r^{(i)}_{t+k}[l] \ln p^{(i)}_{t+k}[l]}_{\text{reward prediction loss}} \Big],

where \hat{s}^{(i)}_{t+k} denotes the k-step look ahead frame prediction with target video frame s^{(i)}_{t+k}, and p^{(i)}_{t+k} denotes the k-step look ahead probability values of the reward-predicting softmax layer—depicted in red in Figure 1—with target reward vector r^{(i)}_{t+k}. The parameter \lambda > 0 controls the trade-off between video frame reconstruction and reward loss. The parameter T is a time horizon parameter that determines how often a single trajectory sample i is unrolled into the future, and K determines the look ahead prediction horizon dictating how far the network predicts into the future by using its own video frame predicted output as input for the next time step. Following Oh et al. (2015) and Michalski et al. (2014), we apply a curriculum learning (Bengio et al., 2009) scheme by successively increasing K in the course of training such that the network initially learns to predict over a short time horizon and becomes fine-tuned on longer-term predictions as training advances (see Section A for details). The network parameters \theta are updated by stochastic gradient descent, derivatives of the training objective w.r.t. \theta are computed with backpropagation through time.
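The compound objective is straightforward to express in code. The following sketch (our own illustration under assumed tensor shapes; the exact normalization in the paper may differ) combines the squared reconstruction error with the lambda-weighted reward cross entropy:

import torch
import torch.nn.functional as F

def joint_loss(pred_frames, target_frames, reward_logits, target_rewards, lam=1.0):
    """Squared frame reconstruction error plus lambda-weighted cross entropy
    on the ternary (clipped) reward.

    pred_frames, target_frames: (B, K, 1, 84, 84) look-ahead predictions/targets
    reward_logits: (B, K, 3) pre-softmax reward scores
    target_rewards: (B, K) class indices in {0, 1, 2} for {-1, 0, +1}
    """
    recon = 0.5 * (pred_frames - target_frames).pow(2).sum(dim=(2, 3, 4)).mean()
    reward_ce = F.cross_entropy(reward_logits.flatten(0, 1),
                                target_rewards.flatten(0, 1))
    return recon + lam * reward_ce

B, K = 8, 3  # toy batch and look-ahead horizon
loss = joint_loss(torch.randn(B, K, 1, 84, 84), torch.randn(B, K, 1, 84, 84),
                  torch.randn(B, K, 3), torch.randint(0, 3, (B, K)))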
"}, {"section_index": "6", "section_name": "4 RESULTS", "section_text": "Our quantitative evaluation examines whether our joint model of system dynamics and reward function results in a shared latent representation that enables accurate cumulative reward prediction. We assess cumulative reward prediction on test sets consisting of approximately 50,000 video frames per game, including actions and rewards. Each network is evaluated on 1,000 trajectories—suitable to analyze up to 100-step ahead prediction—drawn randomly from the test set. Look ahead prediction is measured in terms of the cumulative reward error, which is the difference between ground truth cumulative reward and predicted cumulative reward. For each game, this results in 100 empirical distributions over the cumulative reward error—one distribution for each look ahead step—consisting of 1,000 samples each (one for each trajectory). We compare our model predictions to a baseline model that samples rewards from the marginal reward distribution observed on the test set for each game. Note that negative reward values are absent in the games investigated for this study.

Figure 2 illustrates 20 of the 100 empirical cumulative reward error distributions in all games for our network model in blue and for the baseline model in red (histograms, bottom), together with the median and the 5 to 95 percentiles of the cumulative reward error over look ahead steps (top). Across all games, we observe that our joint state and reward prediction model accurately predicts future cumulative rewards for at least 20 look ahead steps, and that it predicts future rewards substantially more accurately than the baseline model. This is evidenced by cumulative reward error distributions that maintain a unimodal form with mode zero and do not flatten out as quickly as the distributions for the random-prediction baseline model. Best results are achieved in Freeway and Q*bert where the probability of zero cumulative reward error at 51 look ahead steps is still around 80% and 60% respectively—see Figure 2. Note that 51 look ahead steps correspond to 204 frames because the underlying DQN agent, collecting trajectory samples for training and testing our model, skipped every fourth frame when choosing an action—see Section 3.2. Lowest performance is obtained in Seaquest where the probability of zero cumulative reward error at 26 steps (104 frames) is around 40% and begins to flatten out soon thereafter—see Figure 2. Running the ALE emulator at a frequency of 60fps, 26 steps correspond to more than 1 second of real-time game play because of frame skipping. Since our model is capable of predicting 26 steps ahead in less than 1 second, our model enables real-time planning and could therefore be utilized in an online fashion.
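The evaluation protocol above is easy to express in code. The following sketch (our own, with stand-in data in place of real trajectories) computes the per-step cumulative reward error for a model and for the marginal-sampling baseline:

import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps = 1000, 100

true_r = rng.binomial(1, 0.05, size=(n_traj, n_steps))   # stand-in ground truth
pred_r = true_r.copy()                                    # stand-in model output

# Baseline: sample each reward from the marginal distribution of the test set.
p_marginal = true_r.mean()
base_r = rng.binomial(1, p_marginal, size=(n_traj, n_steps))

# Cumulative reward error at each look-ahead step: one empirical
# distribution of (ground truth - prediction) per step, 1000 samples each.
err_model = true_r.cumsum(1) - pred_r.cumsum(1)
err_base = true_r.cumsum(1) - base_r.cumsum(1)
print(np.abs(err_model).mean(0)[-1], np.abs(err_base).mean(0)[-1])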
We now turn our attention to error analysis. While the look ahead step at which errors become prominent differs substantially from game to game, we find that overall our model underestimates cumulative reward. This can be seen in the asymmetry towards positive cumulative reward error values when inspecting the 5 to 95 percentile intervals in the first plot per each game in Figure 2. We identify a likely cause in (pseudo-)stochastic transitions inherent in these games. Considering Seaquest as our running example, objects such as divers and submarines can enter the scene randomly from the right and from the left and at the same time have an essential impact on which rewards the agent can potentially collect. In the ground truth trajectories, the agent's actions are reactions to these objects. If the predicted future trajectory deviates from the ground truth, targeted actions such as shooting will miss their target, leading to underestimating true reward. We analyze this effect in more detail in Section 4.2.

All our experiments were conducted in triplicate with different initial random seeds. Different initial random seeds did not have a significant impact on cumulative reward prediction in all games except Freeway—see the appendix for a detailed analysis. So far, we discussed results concerning reward prediction only. In the appendix, we also evaluate the joint performance of reward and video frame prediction on the test set in terms of the optimization objective as in Oh et al. (2015), where the authors report successful video frame reconstruction over up to approximately 100 steps (400 frames), and observe similar results.

In our evaluations, we investigate cumulative reward predictions quantitatively and qualitatively on five different Atari games (Q*bert, Seaquest, Freeway, Ms Pacman and Space Invaders). The quantitative analysis comprises evaluating the cumulative reward prediction error—see Section 4.1. The qualitative analysis comprises visualizations of example predictions in Seaquest—see Section 4.2.

In the previous section, we identified stochasticity in state transitions as a likely cause for relatively low performance in long-term cumulative reward prediction in games such as Seaquest. In Seaquest, objects may randomly enter a scene in a non-deterministic fashion. Errors in predicting these events result in predicted possible futures that do not match actually observed future states, resulting in inaccurate reward predictions. Here, we support this hypothesis by visualizations in Seaquest illustrating joint video frame and reward prediction for a single network over 20 steps (80 frames)—see Figure 3—where ground truth video frames are compared to predicted video frames in terms of error maps. Error maps emphasize the difference between ground truth and predicted frames through squared error values between pixels, in black or white depending on whether objects are absent or present by mistake in the network's prediction. Actions, ground truth rewards and model-predicted rewards are shown between state transitions. Peculiarities in the prediction process are shown in red.

In step 2, the model predicts reward by mistake because the agent barely misses its target. Steps 4 to 6 report how the model predicts reward correctly but is off by one time step. Steps 7 to 14 depict problems caused by objects randomly entering the scene from the right, which the model cannot predict. Steps 26 to 30 show how the model has problems to predict rewards at steps 26 and 28 as these rewards are attached to objects the model failed to notice entering the scene earlier.
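The error maps in Figure 3 are simple per-pixel comparisons. A minimal version (our own illustration; the original uses signed black/white shading rather than a single threshold) can be computed as follows:

import numpy as np

def error_map(truth, pred, thresh=0.1):
    """Squared per-pixel error between 84x84 grey-scale frames in [0, 1];
    thresholding highlights objects that are missing or hallucinated."""
    err = (truth - pred) ** 2
    return (err > thresh).astype(np.uint8) * 255  # white where frames disagree

truth = np.zeros((84, 84)); truth[40:44, 10:14] = 1.0   # toy "object"
pred = np.zeros((84, 84))                                # the model missed it
print(error_map(truth, pred).sum() // 255, "mismatching pixels")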
"}, {"section_index": "7", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "In this paper, we extended recent work on video frame prediction (Oh et al., 2015) in Atari games to enable reward prediction. Our approach can be used to jointly predict video frames and cumulative rewards up to a horizon of approximately 200 frames in five different games (Q*bert, Seaquest, Freeway, Ms Pacman and Space Invaders). We achieved best results in Freeway and Q*bert where the probability of zero cumulative reward error after 200 frames is still around 80% and 60% respectively, and worst results in Seaquest where the probability of zero cumulative reward error after 100 frames is around 40%. Our study fits into the general line of research using autoencoder networks to learn latent representations from visual data (Ranzato et al., 2007), and extends this line of research by showing that autoencoder networks are capable of learning a combined representation for system dynamics and the reward function in reinforcement learning settings with high-dimensional visual state spaces—a first step towards applying model-based techniques for planning in environments where the reward function is not initially known.

A promising direction for future work is the integration of model-based and model-free approaches for effective interactive learning and planning in complex environments. Directions for achieving this long-standing challenge include the Dyna method (a minimal sketch follows below), which uses a predictive model to artificially augment expensive training data, and has been shown to lead to substantial reductions in data requirements in tabular RL approaches. Alternatively, the model could be utilized for planning via Monte-Carlo tree search (Guo et al., 2014; Browne et al., 2012). We hypothesize that such an approach would be particularly beneficial in multi-task or life-long learning scenarios where the reward function changes but the environment dynamics are stationary. Testing this hypothesis requires a flexible learning framework where the reward function and the artificial environment can be changed by the experimenter in an arbitrary fashion, which is not possible in ALE where the environment and the reward function are fixed per game. A learning environment providing such flexibility is the recently released Malmö platform for Minecraft (Johnson et al., 2016) where researchers can create user-defined environments and tasks in order to evaluate the performance of artificial agents. In the shorter term, we envision improving the prediction performance of our network by regularization methods such as dropout and max-norm regularization (Srivastava et al., 2014)—a state-of-the-art regularizer in supervised learning—and by modifying the optimization objective to enforce similarity between hidden encodings in multi-step ahead prediction and one-step ahead prediction—see Watter et al. (2015). Finally, extensions of our model to non-deterministic state transitions through dropout and variational autoencoder schemes (Kingma & Welling, 2014; Rezende et al., 2014) are a promising direction to alleviate the limitations highlighted in Section 4.2, paving the way for models that adequately predict and reason over alternative possible future trajectories.
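As a concrete illustration of the Dyna idea mentioned above, the following sketch (our own, deliberately tabular and highly simplified rather than the deep-network setting of this paper) interleaves one real transition with several model-generated updates of the Q-values:

import random

Q, model = {}, {}                       # tabular Q-values and learned model
alpha, gamma, n_planning = 0.1, 0.99, 10

def dyna_q_update(s, a, r, s2, actions):
    """One real update followed by n planning updates from the model."""
    def update(s, a, r, s2):
        best = max(Q.get((s2, b), 0.0) for b in actions)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best - Q.get((s, a), 0.0))

    update(s, a, r, s2)
    model[(s, a)] = (r, s2)             # remember the observed transition
    for _ in range(n_planning):         # replay simulated experience
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        update(ps, pa, pr, ps2)

dyna_q_update("s0", "a0", 1.0, "s1", actions=["a0", "a1"])
print(Q)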
\u201c Look ahead steps .\n\nCumulative reward error\n\nCumulative reward error\n\nCumulative reward error\nFigure 2: Cumulative reward error over look ahead steps in five different Atari games. There are\ntwo plots for each game. The top plot per game shows how the median and the 5 to 95 percentiles\nof the cumulative reward error evolve over look ahead steps for both our model (in blue) and a base-\nline model that samples rewards from the marginal reward distribution of the test set (in red). Each\nvertical slice of this concise representation corresponds to a single empirical distribution over the\ncumulative reward error. We depict these for every fifth look ahead step in the compound plots be-\nlow for both models. These empirical error distributions demonstrate successful cumulative reward\nprediction over at least 20 steps (80 frames) in all five games as evidenced by their zero-centered\nand unimodal shape in the first column of each compound plot per game.\nFigure 3: Example predictions in Seaquest. Ground truth video frames, model predictions and error\nmaps emphasizing differences between ground truth and predicted frames\u2014in form of the squared\nerror between pixel values\u2014are compared column-wise. Error maps highlight objects in black or\nwhite respectively depending on whether these objects are absent by mistake or present by mistake\nin the model\u2019s prediction. Actions taken by the agent as well as ground truth rewards (\u2019rew\u2019) and\nreward predictions (\u2019pred\u2019) are shown below video and error frames. Peculiarities in the prediction\nprocess are marked in red. The figure demonstrates how our predictive model fails to anticipate\nobjects that randomly enter the scene from the right and rewards associated to these objects.\nPo MUU CP\n\n10\n\nPrevniiel\n\nSiw reap\n\n= _-\n\na\nleft + fire, rew=0, pred=\n\n_ =\n= \u2014\u2014\u2014\nOa eo\n\nCase Ce\nright\u20ac@few=0, pred=1\n= Fo!\n\n= ee\nup + left + fire, rew=0, pred=\ned Pe\n\n~Go ee\n\naa Ss Ce\nleft + firegf\u00e9w=0, pred=T\n\nee \u2014\n\u2014\u2014\n\u2018oa \u2014\n\ni ee eee\ndown + right\u20acew=1, pred=O\n\n= -\n\nLZ 6\ndown + right, rew=0, pred=\nSm Ee\ncee Cee\nleft + fire, rew=0, pred=\n\n= =\n\noa oa\ndown + right + fire, rew=0, pred=\nPs \u2014\n\n\u2014\u2014\u2014 a\nleft + fire, rew=0, pred=0\n\ner sr\"\n_\u2014\n- ~\n\n\u2014\u2014\u2014 a\nup + fire, rew=0, pred=\n\nwere\n\nNIV REM Pe EME OY\n\n\u2014_\u2014\u2014_\u2014\n- =\n\n\u2014\u2014 a |\ndown + right + fire, rew=0, pre\n= ars\n\nae oe\ndown + right + fire, rew=0, pre\n\n- =\n\n4\n\u2014\u2014\u2014 TT ae\ndown + right + fire, rew=0, pre\n\n* . oe\n\n\u2014\u2014\u2014 | [=i sen]\ndown + right + fire, rew=0, pret\n\na\n\n\u2014\u2014- a |\nup + left,\u20ac@w=1, pred=0\n\na a\n\nmed is. Sanaa)\n\ndown + right + fire, rew=0, pred=0\nwT 2\"\n\n\u2014~ . \u00ab\n\n\u2014\u2014\u2014\u2014 a 4\nup + left\u00e9w=1, pred=0\n\nLaw Zea\n\na 4\ndown, rew=0, pred=0\n\nas a= =\n\n(=\n=\nup + fire, rew=0, pred=\nD P Bertsekas. Dynamic programming & optimal control, volume 1. Athena Scientific, 2005.\nD P Bertsekas. Dynamic programming & optimal control, volume 2. Athena Scientific, 2007\nS Samothrakis, and S Colton. A survey of monte carlo tree search methods. JEEE Transactions\non Computational Intelligence and Al in Games, 4(1):1-49, 2012.\n\nA Dosovitskiy, J T Springenberg, and T Brox. Learning to generate chairs with convolutional neural\nnetworks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.

C Finn, X Y Tan, Y Duan, T Darrell, S Levine, and P Abbeel. Deep spatial autoencoders for visuomotor learning. In Proceedings of the IEEE International Conference on Robotics and Automation, 2016.

X Glorot and Y Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010.

X Glorot, A Bordes, and Y Bengio. Deep sparse rectifier neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2011.

R Goroshin, M Mathieu, and Y LeCun. Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems, 2015.

K Gregor, I Danihelka, A Graves, D J Rezende, and D Wierstra. DRAW: a recurrent neural network for image generation. In Proceedings of the International Conference on Machine Learning, 2015.

X Guo, S Singh, H Lee, R Lewis, and X Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, 2014.

G E Hinton, A Krizhevsky, and S D Wang. Transforming auto-encoders. In Proceedings of the International Conference on Artificial Neural Networks, 2011.

M Johnson, K Hofmann, T Hutton, and D Bignell. The Malmo platform for artificial intelligence experimentation. In Proceedings of the International Joint Conference on Artificial Intelligence, 2016.

S Lange, M Riedmiller, and A Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks, 2012.

V Michalski, R Memisevic, and K Konda. Modeling deep temporal dependencies with recurrent grammar cells. In Advances in Neural Information Processing Systems, 2014.

V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, and D Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J Oh, X Guo, H Lee, R Lewis, and S Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, 2015.

R Pascanu, T Mikolov, and Y Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, 2013.

M Ranzato, F J Huang, Y-L Boureau, and Y LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.

D J Rezende, S Mohamed, and D Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the International Conference on Machine Learning, 2014.

B F Skinner. The behavior of organisms: an experimental analysis. Appleton-Century-Crofts, 1938.

N Srivastava, G E Hinton, A Krizhevsky, I Sutskever, and R Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.

N Srivastava, E Mansimov, and R Salakhutdinov. Unsupervised learning of video representations using LSTMs.
In Proceedings of the International Conference on Machine Learning, 2015.

R S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the International Conference on Machine Learning, 1990.

R S Sutton and A G Barto. Reinforcement learning: an introduction. MIT Press, 1998.

W H Thorpe. The origins and rise of ethology. Heinemann Educational Books, 1979.

M Watter, J T Springenberg, J Boedecker, and M Riedmiller. Embed to control: a locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, 2015."}, {"section_index": "8", "section_name": "A.1 TRAINING DETAILS", "section_text": "In our experiments, we modified the reward prediction loss slightly in order to prevent exploding gradient values by replacing the term −ln(p) with a first-order Taylor approximation for p-values smaller than e^{−10}—a similar technique is used in DQN (Mnih et al., 2015) to improve the stability of the optimization algorithm. To identify optimal values for the reward weight λ, we performed initial experiments on Ms Pacman without applying the aforementioned curriculum learning scheme, instead using a fixed look ahead parameter K = 1. We evaluated the effect of different λ-values ∈ {0.1, 1, 10, 100} on the training objective and identified λ = 1 for conducting further experiments—see Section A.2. After identifying an optimal reward weight, we conducted additional initial experiments without curriculum learning with fixed look ahead parameter K = 1 on all of the five different Atari games used in this paper. We observed periodic oscillations in the reward prediction loss of the training objective in Seaquest, which was fixed by adding gradient clipping (Pascanu et al., 2013) with threshold parameter 1 to our optimization procedure—experiments investigating the effect of gradient clipping in Seaquest are reported in Section A.3. The fine-tuning effect of curriculum learning on the training objective in our final experiments is shown in Section A.4 for all of the five analysed Atari games.
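To make the loss modification concrete, the following is a minimal NumPy sketch, assuming the approximation used is the standard first-order Taylor expansion of −ln(p) around the threshold e^{−10} (the function name and constant are illustrative, not the original code):

```python
import numpy as np

P_MIN = np.exp(-10.0)  # threshold below which -ln(p) is linearized (Section A.1)

def stable_neg_log_prob(p):
    """-ln(p), replaced by its first-order Taylor expansion around P_MIN for p < P_MIN.

    The tangent line at p = P_MIN is -ln(P_MIN) - (p - P_MIN) / P_MIN, which bounds
    the gradient magnitude by 1/P_MIN and thereby prevents the exploding gradients
    an unbounded -ln(p) produces as p approaches zero.
    """
    p = np.asarray(p, dtype=np.float64)
    taylor = -np.log(P_MIN) - (p - P_MIN) / P_MIN
    return np.where(p < P_MIN, taylor, -np.log(np.maximum(p, P_MIN)))
```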
"}, {"section_index": "9", "section_name": "A.3 EFFECT OF GRADIENT CLIPPING IN SEAQUEST", "section_text": "After identifying an optimal value for the reward weight (see Section A.2), we observed oscillations in the reward loss of the training objective in Seaquest—see first column in Figure 5—which were solved by adding gradient clipping to our optimization procedure—see second and third column in Figure 5. We tested two different values for the gradient clipping threshold (5 and 1), both of which worked, but for a value of 1 the oscillation vanished completely.

We performed all our experiments in Python with Chainer and adhered to the instructions in Oh et al. (2015) as closely as possible. Trajectory samples for learning the network parameters were obtained from a previously trained DQN agent according to Mnih et al. (2015). The dataset for training comprised around 500,000 video frames per game in addition to actions chosen by the DQN agent and rewards collected during game play. Video frames used as network input were 84 × 84 grey-scale images with pixel values between 0 and 255, down-sampled from the full-resolution 210 × 160 ALE RGB images. We applied a further preprocessing step by dividing each pixel by 255 and subtracting mean pixel values from each image, leading to final pixel values ∈ [−1; 1]. A detailed network architecture is shown in Figure 1 in the main paper. All weights in the network were initialized according to Glorot & Bengio (2010) except for those two layers that participate in the element-wise multiplication in Figure 1: the weights of the action-processing layer were initialized uniformly in the range [−0.1; 0.1] and the weights of the layer receiving the latent encoding of the input video frames were initialized uniformly in the range [−1; 1]. Training was performed for 1,500,000 minibatch iterations with a curriculum learning scheme increasing the look ahead parameter K every 500,000 iterations from 1 to 3 to 5. When increasing the look ahead parameter K for the first time after 500,000 iterations, the minibatch size J was also altered from 32 to 8, as was the learning rate for parameter updates from 10^{-4} to 10^{-5}. Throughout the entire curriculum scheme, the time horizon parameter determining the number of times a single trajectory is unrolled into the future was T = 4. The optimizer for updating weights was Adam with gradient momentum 0.9, squared gradient momentum 0.95 and epsilon parameter 10^{-5}. In evaluation mode, network outputs were clipped to [−1; 1] so that strong activations could not accumulate over roll-out time in the network.
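For reference, the curriculum schedule just described can be summarized in a small sketch (a paraphrase of the schedule in this appendix, not the original code; names are illustrative):

```python
def curriculum_settings(iteration):
    """Training settings as a function of minibatch iteration (Section A.1).

    The look ahead parameter K grows from 1 to 3 to 5 every 500,000 iterations;
    the minibatch size J and the learning rate are reduced once K first increases.
    """
    if iteration < 500_000:
        return dict(K=1, minibatch_size=32, learning_rate=1e-4)
    elif iteration < 1_000_000:
        return dict(K=3, minibatch_size=8, learning_rate=1e-5)
    else:
        return dict(K=5, minibatch_size=8, learning_rate=1e-5)
```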
To identify optimal values for the reward weight λ, we conducted initial experiments in Ms Pacman without curriculum learning and a fixed look ahead horizon K = 1. We tested four different λ-values ∈ {0.1, 1, 10, 100} and investigated how the frame reconstruction loss and the reward loss of the training objective evolve over minibatch iterations—see Figure 4. Best results were obtained for λ = 1 and λ = 10, whereas values of λ = 0.1 and λ = 100 led to significantly slower convergence and worse overall training performance respectively.

Figure 4: Effect of reward weight on training loss in Ms Pacman. Each of the four panels depicts one experiment with a different reward weight λ. Each panel shows how the training loss evolves over minibatch iterations in terms of two subplots reporting video frame reconstruction and reward loss respectively. Each experiment was conducted three times with different initial random seeds depicted in blue, green and red. Graphs were smoothed with an exponential window of size 1000.

Figure 5: Effect of gradient clipping on training loss in Seaquest. The three panels compare experiments with no gradient clipping to those with gradient clipping using the threshold values 5 and 1 respectively. Subplots within each panel are similar to those in Figure 4 but display in the first row the evolution of the compound training loss in addition to the frame reconstruction and reward loss."}, {"section_index": "10", "section_name": "A.4 EFFECT OF CURRICULUM LEARNING", "section_text": "In our final experiments with curriculum learning, the networks were trained for 1,500,000 minibatch iterations in total, but the look ahead parameter K was gradually increased every 500,000 iterations from 1 to 3 to 5. The networks were hence initially trained on one-step ahead prediction only and later on fine-tuned on further-step ahead prediction. Figure 6 shows how the training objective evolves over iterations. The characteristic 'bumps' in the training objective every 500,000 iterations as training evolves demonstrate improvements in long-term predictions in all games except Freeway, where the training objective assumed already very low values within the first 500,000 iterations and might therefore have been insensitive to further fine-tuning by curriculum learning.

Figure 6: Effect of curriculum learning on five different Atari games. Each panel corresponds to a different game; individual panels are structured in the same way as are those in Figure 5."}, {"section_index": "11", "section_name": "A.5 EFFECT OF RANDOM SEEDS", "section_text": "We conducted three different experiments per game with different initial random seeds. The effect of different initial random seeds on the cumulative reward error is summarized in Figure 7, which reports how the median and the 5 to 95 percentiles of the cumulative reward error evolve over look ahead steps in the different experiments per game. Note that the results of the first column in Figure 7 are shown in Figure 2 from the main paper together with a more detailed analysis depicting empirical cumulative reward error distributions for some look ahead steps. The random initial seed does not seem to have a significant impact on the cumulative reward prediction except for Freeway, where the network in the third experiment starts to considerably overestimate cumulative rewards at around 30 to 40 look ahead steps.

In order to investigate this reward overestimation in Freeway further, we analyse visualizations of joint video frame and reward prediction for this particular seed (similar in style to Figure 3 from Section 4.2 in the main paper). The results are shown in Figure 8, where a peculiar situation occurs after 31 predicted look ahead steps. In Freeway, the agent's job is to cross a busy road from the bottom to the top without bumping into a car in order to receive reward. If the agent bumps into a car, the agent is propelled downwards, further away from the reward-yielding top. This propelled downwards movement happens even when the agent tries to move upwards. Exactly that kind of situation is depicted at the beginning of Figure 8 and occurs for this particular prediction after 31 steps. Our predictive model is however not able to correctly predict the aforementioned downwards movement caused by the agent hitting the car, which is highlighted in red throughout steps 31 to 35, documenting an increasing gap between ground truth and predicted agent position as the propelled downwards movement of the ground truth agent continues.
In the course of further prediction, the network model assumes the agent to reach the reward-yielding top side of the road way too early, which results in a sequence of erroneous positive reward predictions throughout steps 41 to 50, and, as a side effect, the predictive model seemingly loses track of other objects in the scene. In conclusion, this finding may serve as a possible explanation for cumulative reward overestimation for that particular experiment in Freeway.

Figure 7: Effect of different initial random seeds on cumulative reward error. The plots show how the cumulative reward error evolves over look ahead steps in terms of the median and the 5 to 95 percentiles for our network model (blue) as well as the baseline model (red) in each experiment. Each row refers to a different game, each column refers to a different experiment per game initialized with a different random seed. The first column of this figure is presented in Figure 2 of the main paper explaining the results in more detail by additionally illustrating empirical distributions over the cumulative reward error for some look ahead steps.

Figure 8: Example predictions in Freeway over 20 steps. The figure is similar in nature to Figure 3 from the main paper with the only difference that predictions are depicted from time step 31 onwards."}, {"section_index": "12", "section_name": "A.6 LOSS ON TEST SET", "section_text": "In the main paper, our analysis focuses on evaluating how well our model serves the purpose of cumulative reward prediction. Here, we evaluate network performance in terms of both the video frame reconstruction loss as well as the reward prediction loss on the test set, following the analysis conducted in Oh et al. (2015).
For each game, we sample 300 minibatches of size J = 50 from the underlying test set and compute the test loss over K = 100 look ahead steps with the formula presented in Section 3.3 of the main paper used for learning network parameters, but without averaging over look ahead steps, because we aim to illustrate the test loss as a function of look ahead steps—statistics of this analysis are plotted in Figure 9.

Best overall test loss is achieved in Freeway and for initial look ahead steps (up to roughly between 40 and 60 steps) in Q*bert, which is in accordance with results for cumulative reward prediction from the main paper. Also in line with results from the main paper is the finding that the reward loss on the test set is worse in Seaquest, Ms Pacman and Space Invaders when compared to Q*bert (up to approximately 40 steps) and Freeway. Worst video frame reconstruction loss is observed for Space Invaders, in compliance with Oh et al. (2015), where the authors report that there are objects in the scene moving at a period of 9 time steps, which is hard to predict for a network taking only the last 4 frames from the last 4 steps as input for future predictions. At first sight, it might seem a bit surprising that the reward prediction loss in Space Invaders is significantly lower than in Seaquest and Ms Pacman for long-term ahead prediction despite the higher frame reconstruction loss in Space Invaders. A possible explanation for this paradox might be the frequency at which rewards are collected—this frequency is significantly higher in Seaquest and Ms Pacman than in Space Invaders. A reward prediction model with bias towards zero rewards—as indicated by the main results in the paper—might therefore err less often in absolute terms when rewards are collected at a lower frequency and may hence achieve lower overall reward reconstruction loss.

Figure 9: Loss on test set over look ahead steps. Each row reports the loss on the test set over 100 look ahead steps for a different game. The first column illustrates the compound loss consisting of the video frame reconstruction loss (second column) and the reward prediction loss (third column). The loss on the test set is computed according to Oh et al. (2015) similar to the training loss for learning network parameters, however with a different look ahead parameter K = 100 and a different minibatch size J = 50, and without averaging over look ahead steps since we aim to plot the test loss as a function of look ahead steps. For each game, the test loss is computed for 300 minibatches resulting in an empirical distribution with 300 loss values per look ahead step. The figure shows the mean (in green), the median (in red), the 5 to 95 percentiles (in shaded blue) as well as minimum and maximum elements (in black dashed lines) of these empirical distributions."}]
BJbD_Pqlg
[{"section_index": "0", "section_name": "HUMAN PERCEPTION IN COMPUTER VISION /\nCONFERENCE SUBMISSIONS", "section_text": "Ron Dekel *\nDepartment of Neurobiology\nWeizmann Institute of Science\nRehovot, PA 7610001, Israel\nComputer vision has made remarkable progress in recent years. Deep neural\nnetwork (DNN) models optimized to identify objects in images exhibit unprece-\ndented task-trained accuracy and, remarkably, some generalization ability: new\nvisual problems can now be solved more easily based on previous learning. Bio-\nlogical vision (learned in life and through evolution) is also accurate and general-\npurpose. Is it possible that these different learning regimes converge to similar\nproblem-dependent optimal computations? We therefore asked whether the hu-\nman system-level computation of visual perception has DNN correlates and con-\nsidered several anecdotal test cases. We found that perceptual sensitivity to image\nchanges has DNN mid-computation correlates, while sensitivity to segmentation,\ncrowding and shape has DNN end-computation correlates. Our results quantify\nthe applicability of using DNN computation to estimate perceptual loss, and are\nconsistent with the fascinating theoretical view that properties of human percep-\ntion are a consequence of architecture-independent visual learning.\nConsidering the learned computation of ImageNet-trained DNNs, we find\ne Large computation changes for perceptually salient image changes (Figure 1).\ne Gestalt: segmentation, crowding, and shape interactions in computation (Figure 2).\ne Contrast constancy: bandpass transduction in first layers is later corrected (Figure 3).\nThese properties are reminiscent of human perception, perhaps because learned general-purpos:\nclassifiers (human and DNN) tend to converge.\nDeep neural networks (DNNs) are a class of computer learning algorithms that have become widely\nused in recent years (LeCun et al., 2015). By training with millions of examples, such models\nachieve unparalleled degrees of task-trained accuracy (Krizhevsky et al., 2012). This is not unprece-\ndented on its own - steady progress has been made in computer vision for decades, and to some\ndegree current designs are just scaled versions of long-known principles (Lecun et al., 1998). In pre-\nvious models, however, only the design is general-purpose, while learning is mostly specific to the\ncontext of a trained task. Interestingly, for current DNNs trained to solve a large-scale image recog-\nnition problem (Russakovsky et al., 2014), the learned computation is useful as a building block for\ndrastically different and untrained visual problems (Huh et al., 2016; Yosinski et al., 2014).\nFor example, orientation- and frequency-selective features (Gabor patches) can be considered\ngeneral-purpose visual computations. Such features are routinely discovered by DNNs (Krizhevsky\net al., 2012; Zeiler & Fergus, 2013), by other learning algorithms (Hinton & Salakhutdinov, 2006;\nhttps://sites.google.com/site/rondekelhomepage/"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "As an extension, general-purpose computations are perhaps of universal use. For example, a dimen-\nsionality reduction transformation that optimally preserves recognition-relevant information may\nconstitute an ideal computation for both DNN and animal. 
More formally, different learning algorithms with different physical implementations may converge to the same computation when similar (or sufficiently general) problems are solved near-optimally. Following this line of reasoning, DNN models with good general-purpose computations may be computationally similar to biological visual systems, even more so than less accurate and less general biologically plausible simulations (Kriegeskorte, 2015; Yamins & DiCarlo, 2016).

Related work seems to be consistent with computation convergence. First, different DNN training regimes seem to converge to a similar learned computation (Li et al., 2015; Zhou et al., 2014). Second, image representation may be similar in trained DNN and in biological visual systems. That is, when the same images are processed by DNN and by humans or monkeys, the final DNN computation stages are strong predictors of human fMRI and monkey electrophysiology data collected from visual areas V4 and IT (Cadieu et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014).

Here, we quantify the similarity between human visual perception, as measured by psychophysical experiments, and individual computational stages (layers) in feed-forward DNNs trained on a large-scale image recognition problem (ImageNet LSVRC). Comparison is achieved by feeding the experimental image stimuli to the trained DNN and comparing a DNN metric (mean mutual information or mean absolute change) to perceptual data. The use of reduced (simplified and typically non-natural) stimuli ensures identical inherent task difficulty across compared categories and prevents confounding of categorization consistency with measured similarity. Perception, a system-level computation, may be influenced less by the architectural discrepancy (biology vs. DNN) than are neural recordings.

From a perceptual perspective, an image change of fixed size has different saliency depending on image context (Polat & Sagi, 1993). To investigate whether the computation in trained DNNs exhibits similar contextual modulation, we used the Local Image Masking Database (Alam et al., 2014), in which 1080 partially-overlapping images were subjected to different levels of the same random additive noise perturbation, and for each image, a psychophysical experiment determined the threshold noise level at which the added-noise image is discriminated from two noiseless copies at 75% (Figure 1a). Threshold is the objective function that is compared with an L1-distance correlate in the DNN representation. The scale of measured threshold was:

$$20 \cdot \log_{10}\left(\frac{\mathrm{std}(\mathrm{noise})}{\bar{I}}\right),$$

where std(noise) is the standard deviation of the additive noise, and $\bar{I}$ is the mean image pixel value calculated over the region where the noise is added (i.e. image center).
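In code, this scale amounts to the following (a sketch under the stated definitions; `noise` and `image_center` are arrays of pixel values for the added noise and the perturbed image region):

```python
import numpy as np

def threshold_scale_db(noise, image_center):
    """Noise size in dB relative to the mean pixel value of the perturbed region (Equation 1)."""
    return 20.0 * np.log10(np.std(noise) / np.mean(image_center))
```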
Furthermore, more accurate DNN models exhibit stronger predictive power (Cadieu et al., 2014; Dubey & Agarwal, 2016; Yamins et al., 2014), and the final DNN computation stage is even a strong predictor of human-perceived shape discrimination (Kubilius et al., 2016). However, some caution is perhaps unavoidable, since measured similarity may be confounded with categorization consistency, view-invariance resilience, or similarity in the inherent difficulty of the tasks undergoing comparison. A complementary approach is to consider images that were produced by optimizing trained DNN-based perceptual metrics (Gatys et al., 2015a;b; Johnson et al., 2016; Ledig et al., 2016), which perhaps yields undeniable evidence of non-trivial computational similarity, although a more objective approach may be warranted.

Figure 1: Predicting perturbation thresholds. a, For a fixed image perturbation, perceptual detection threshold (visualized by red arrow) depends on image context. b, Measured perceptual threshold is correlated with the average L1 change in DNN computation due to image perturbation (for DNN model VGG-19, image scale=100%). c, Explained variability (R²) of perceptual threshold data when L1 change is based on isolated computational layers, for different input image scales. Same VGG-19 model as in (b). X-axis labels: data refers to raw image pixel data, conv*_1 and fc_* are the before-ReLU output of a convolution and a fully-connected operation, respectively, and prob is the output class label probabilities vector. d, Example images for which predicted threshold in b is much higher than perceptually measured ("Overshoot", where perturbation saliency is better than predicted), or vice versa ("Undershoot"). Examples are considered from several perceptual threshold ranges (±2 dB of shown number).

The DNN correlate of perceptual threshold we used was the average L1 change in DNN computation between added-noise images and the original, noiseless image. Formally,

$$L_1^{i,n}(I) = \overline{\left| a_i\left(I + \mathrm{noise}(n)\right) - a_i(I) \right|},$$

where $a_i(X)$ is the activation value of neuron i during the DNN feedforward pass for input image X, and the inner average (denoted by bar) is taken over repetitions with random n-sized noise (noise is introduced at random phase spectra in a fixed image location, an augmentation that follows the between-image randomization described by Alam et al., 2014; the number of repetitions was 10 or more). Unless otherwise specified, the final L1 prediction is $L_1^{i,n}$ averaged across noise levels (−40 to 25 dB with 5-dB intervals) and computational neurons (first within and then across computational stages). Using L1 averaged across noise levels as a correlate for the noise level of perceptual threshold is a simple approximation with minimal assumptions.

Results show that the L1 metric is correlated with the perceptual threshold for all tested DNN architectures (Figure 1b, 4a-c). In other words, higher values of the L1 metric (indicating larger changes in DNN computation due to image perturbation, consistent with higher perturbation saliency) are associated with lower measured perceptual thresholds.
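As a sketch (not the original implementation), the per-noise-level computation could look as follows, assuming a hypothetical `activations(image)` function that returns the concatenated unit responses of the network:

```python
import numpy as np

def l1_change(activations, image, noise_samples):
    """Per-neuron mean absolute activation change caused by added noise.

    `noise_samples` holds several random draws of noise at one level n; the
    result corresponds to L1^{i,n}(I), which the text further averages over
    noise levels and neurons (first within and then across layers).
    """
    clean = activations(image)
    diffs = [np.abs(activations(image + noise) - clean) for noise in noise_samples]
    return np.mean(diffs, axis=0)
```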
To quantify and compare predictive power, we considered the percent of linearly explained variability (R²). For all tested DNN architectures, the prediction explains about 60% of the perceptual variability (Tables 1, 2; baselines in Tables 3-5), where inter-person similarity representing the theoretical maximum is 84% (Alam et al., 2014). The DNN prediction is far more accurate than a prediction based on simple image statistical properties (e.g. RMS contrast), and is on par with a detailed perceptual model that relies on dozens of psychophysically collected parameters (Alam et al., 2014). The Spearman correlation coefficient is much higher compared with the perceptual model (with an absolute SROCC value of about 0.79 compared with 0.70, Table 1), suggesting that the L1 metric gets the order right but not the scale. We did not compare these results with models that fit the experimental data (e.g. Alam et al., 2015; Liu & Allebach, 2016), since the L1 metric has no explicit parameters. Also, different DNN architectures exhibited high similarity in their predictions (R² of about 0.9, e.g. Figure 4d).

Prediction can also be made from isolated computational stages, instead of across all stages as before. This analysis shows that the predictive power peaks mid-computation across all tested image scales (Figure 1c). This peak is consistent with the use of middle DNN layers to optimize perceptual metrics (Gatys et al., 2015a;b; Ledig et al., 2016), and is reminiscent of cases in which low- to mid-level vision is the performance-limiting computation in the detection of at-threshold stimuli (Campbell & Robson, 1968; Del Cul et al., 2007).

Finally, considering the images for which the L1-based prediction has a high error suggests a factor which causes a systematic inconsistency with perception (Figures 1d, 6). This factor may be related to the mean image luminance: by introducing noise perturbations according to the scale of Equation 1, a fixed noise size (in dB) corresponds to smaller pixel changes in dark compared with bright images. (Using this scale reflects an assumption of multiplicative rather than additive conservation; this assumption may be justified for the representation at the final but perhaps not the intermediate computational stages, considering the log-linear contrast response discussed in Section 5.) Another factor may be the degree to which image content is identifiable.

Table 1: Prediction accuracy. Percent of linearly explained variability (R²), absolute value of Spearman rank-order correlation coefficient (SROCC), and the root mean squared error of the linear prediction (RMSE) are presented for each prediction model. Note the measurement scale of the threshold data being predicted (Eq. 1). (*) Thresholds linearized through a logistic transform before prediction (see Larson & Chandler, 2010), possibly increasing but not decreasing measured predictive strength. (**) Average of four similar alternatives.

The previous analysis suggested gross computational similarity between human perception and trained DNNs. Next, we aimed to extend the comparison to more interpretable properties of perception by considering more highly controlled designs. To this end, we considered cases in which static background context modulates the difficulty of discriminating a foreground shape, despite no spatial overlap of foreground and background.
This permits interpretation by considering the cause of the modulation.

We first consider segmentation, in which arrangement is better discriminated for arrays of consistently oriented lines compared with inconsistently oriented lines (Figure 2a) (Pinchuk-Yacobi et al., 2016). Crowding is considered next, where surround clutter that is similar to the discriminated target leads to deteriorated discrimination performance (Figure 2b) (Livne & Sagi, 2007). Last to be addressed is object superiority, in which a target line location is better discriminated when it is in a shape-forming layout (Figure 2c) (Weisstein & Harris, 1974). In this case, clutter is controlled by having the same fixed number of lines in context. To measure perceptual discrimination, these works introduced performance-limiting manipulations such as location jittering, brief presentation, and temporal masking. While different manipulations showed different measured values, order-of-difficulty was typically preserved. Here we changed all the original performance-limiting manipulations to location jittering (whole-shape or element-wise, see Section 8.4).

To quantify discrimination difficulty in DNNs, we measured the target-discriminative information of isolated neurons (where performance is limited by location jittering noise), then averaged across all neurons (first within and then across computational layer stages). Specifically, for each neuron, we measured the reduction in categorization uncertainty due to observation, termed mutual information (MI):

$$MI(A_i; C) = H(C) - H(C \mid A_i),$$

where H stands for entropy, and $A_i$ is a random variable for the value of neuron i when the DNN processes a random image from a category defined by the random variable C. For example, if a neuron gives a value in the range of 100.0 to 200.0 when the DNN processes images from category A, and 300.0 to 400.0 for category B, then the category is always known by observing the value, and so mutual information is high (MI=1 bits). On the other extreme, if the neuron has no discriminative task information, then MI=0 bits. To measure MI, we quantized activations into eight equal-amount bins, and used 500 samples (repetitions having different location jittering noise) across categories. The motivation for this correlate is the assumption that the perceptual order-of-difficulty reflects the quantity of task-discriminative information in the representation.
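The following is a minimal NumPy sketch of this estimate for a single neuron, assuming equal-count (quantile) binning into eight bins as described above (the function and variable names are illustrative, not the original code):

```python
import numpy as np

def mutual_information(samples, labels, n_bins=8):
    """Estimate MI(A_i; C) in bits for one neuron from paired (activation, category) samples.

    Activations are quantized into n_bins equal-count bins; entropies are then
    estimated from the empirical joint histogram of (bin, category).
    """
    samples = np.asarray(samples, dtype=np.float64)
    edges = np.quantile(samples, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges[1:-1], samples), 0, n_bins - 1)
    categories = {c: k for k, c in enumerate(sorted(set(labels)))}
    joint = np.zeros((n_bins, len(categories)))
    for b, c in zip(bins, labels):
        joint[b, categories[c]] += 1.0
    joint /= joint.sum()
    # product of marginals p(a) * p(c), for the MI log-ratio
    marginal = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
    nonzero = joint > 0
    return float(np.sum(joint[nonzero] * np.log2(joint[nonzero] / marginal[nonzero])))
```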
Results show that, across hundreds of configurations (varying pattern element size, target location jitter magnitude, and DNN architecture; see Section 8.4), the qualitative order of difficulty in terms of the DNN MI metric is consistent with the order of difficulty measured in human psychophysical experiments, for the conditions addressing segmentation and crowding (Figures 2d, 7; for baseline models see Figure 8). It is interesting to note that the increase in similarity develops gradually along different layer types in the DNN computation (i.e. not just pooling layers), and is accompanied by a gradual increase in the quantity of task-relevant information (Figure 2e-g). This indicates a link between task relevance and computational similarity for the tested conditions. Note that unlike the evident increase in isolated unit task information, the task information from all units combined decreases by definition along any computational hierarchy. An intuition for this result is that the total hidden information decreases, while more accessible per-unit information increases.

For shape formation, four out of six shapes consistently show order of difficulty like perception, and two shapes consistently do not (caricature in Figure 2h; actual data in Figure 9).

Figure 2: Background context. a-c, Illustrations of reproduced discrimination stimuli for three psychophysical experiments (actual images used were white-on-black rather than black-on-white and pattern size was smaller, see Figures 12-14). d, Number of configurations for which order-of-difficulty in discrimination is qualitatively consistent with perception according to a mutual information DNN metric. Configurations vary in pattern (element size, target location, and jitter magnitude; see Section 8.4) and in DNN architecture used (CaffeNet, GoogLeNet, VGG-19, and ResNet-152). The DNN metric is the average across neurons of the isolated neuron target-discriminative information (averaged first within, and then across computational layer stages), where performance is limited by location jittering (e.g. evident jitter in illustrations). e-g, The value of the MI metric across computational layers of model VGG-19 for a typical pattern configuration. The six "hard" (gray) lines in Shape MI correspond to six different layouts (see Section 8.4.3). Analysis shows that for isolated computation stages, similarity to perception is evident only at the final DNN computation stages. h, A caricature summarizing the similarity and discrepancy of perception and the MI-based DNN prediction for Shape (see Figure 9).

A cornerstone of biological vision research is the use of sine gratings at different frequencies, orientations, and contrasts (Campbell & Robson, 1968). Notable are results showing that the lowest perceivable contrast in human perception depends on frequency. Specifically, high spatial frequencies are attenuated by the optics of the eye, and low spatial frequencies are believed to be attenuated due to processing inefficiencies (Watson & Ahumada, 2008), so that the lowest perceivable contrast is found at intermediate frequencies. (To appreciate this yourself, examine Figure 3a.) Thus, for low-contrast gratings, the physical quantity of contrast is not perceived correctly: it is not preserved across spatial frequencies. Interestingly, this is corrected for gratings of higher contrasts, for which perceived contrast is more constant across spatial frequencies (Georgeson & Sullivan, 1975).

The DNN correlate we considered is the mean absolute change in DNN representation between a gray image and sinusoidal gratings, at all combinations of spatial frequency and contrast.
Formally, for neurons in a given layer, we measured:

$$L_1(\mathrm{contrast}, \mathrm{frequency}) = \frac{1}{N_{\mathrm{neurons}}} \sum_{i=1}^{N_{\mathrm{neurons}}} \left| \bar{a}_i(\mathrm{contrast}, \mathrm{frequency}) - \bar{a}_i(0, 0) \right|,$$

where $\bar{a}_i(\mathrm{contrast}, \mathrm{frequency})$ is the average activation value of neuron i over 250 sine images (random orientation, random phase), $\bar{a}_i(0, 0)$ is the response to a blank (gray) image, and $N_{\mathrm{neurons}}$ is the number of neurons in the layer. This measure reflects the overall change in response vs. the gray image.

Results show a bandpass response for low-contrast gratings (blue lines strongly modulated by frequency, Figures 3, 10), and what appears to be a mostly constant response at high contrast for end-computation layers (red lines appear more invariant to frequency), in accordance with perception.

We next aimed to compare these results with perception. Data from human experiments is generally iso-output (i.e. for a pre-set output, such as 75% detection accuracy, the input is varied to find the value which produces the preset output). However, the DNN measurements here are iso-input (i.e. for a fixed input contrast the L1 is measured). As such, human data should be compared to the interpolated inverse of the DNN measurements. Specifically, for a set output value, the interpolated contrast value which produces the output is found for every frequency (Figure 11). This analysis permits quantifying the similarity of iso-output curves for human and DNN, measured here as the percent of log-contrast variability in human measurements which is explained by the DNN predictions. This showed a high explained variability at the end computation stage (prob layer, R² = 94%), but importantly, a similarly high value at the first computational stage (conv1_1 layer, R² = 96%). Intuitively, while the "internal representation" variability in terms of L1 is small, the iso-output variability in the number of input contrast changes is still high. For example, for the prob layer, about the same L1 is measured for (contrast=1, freq=75) and for (contrast=0.18, freq=12).

An interesting, unexpected observation is that the logarithmically spaced contrast inputs are linearly spaced at the end-computation layers. That is, the average change in DNN representation scales logarithmically with the size of the input change. This can be quantified by the correlation of the output L1 with the log-contrast input, which showed R² = 98% (averaged across spatial frequencies) for prob, while much lower values were observed for early and middle layers (up to layer fc7). The same computation when scrambling the learned parameters of the model showed R² = 60%. Because the degree of log-linearity observed was extremely high, it may be an important emergent property of the learned DNN computation, which may deserve further investigation. However, this property is only reminiscent of and not immediately consistent with the perceptual power-law scaling (Gottesman et al., 1981).
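A minimal sketch of this measurement, assuming a hypothetical `mean_activations(images)` helper that returns the per-neuron responses of a given layer averaged over the supplied images (the grating generator and all names here are illustrative, not the original implementation):

```python
import numpy as np

def grating(n, contrast, frequency, rng):
    """One sinusoidal grating (random orientation and phase) around a zero-mean gray level."""
    theta, phase = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
    y, x = np.mgrid[0:n, 0:n] / float(n)
    wave = np.sin(2 * np.pi * frequency * (x * np.cos(theta) + y * np.sin(theta)) + phase)
    return contrast * wave

def l1_response(mean_activations, n, contrast, frequency, n_images=250, seed=0):
    """Average over neurons of |a_i(contrast, frequency) - a_i(0, 0)|, as in the equation above."""
    rng = np.random.default_rng(seed)
    images = [grating(n, contrast, frequency, rng) for _ in range(n_images)]
    a_grating = mean_activations(images)             # mean response to the gratings
    a_blank = mean_activations([np.zeros((n, n))])   # response to the blank (gray) image
    return float(np.mean(np.abs(a_grating - a_blank)))
```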
Figure 3: Contrast sensitivity. a. Perceived contrast is strongly affected by spatial frequency at low contrast, but less so at high contrast (which preserves the physical quantity of contrast and is thus termed constancy). b. The L1 change in VGG-19 representation between a gray image and images depicting sinusoidal gratings at each combination of sine spatial frequency (x-axis) and contrast (color) (random orientation, random phase), considering the raw image pixel data representation (data), the before-ReLU output of the first convolutional layer representation (conv1_1), the output of the last fully-connected layer representation (fc8), and the output class label probabilities representation (prob).

It may be tempting to believe that what we see is the result of a simple transformation of visual input. Centuries of psychophysics have, however, revealed complex properties in perception, by crafting stimuli that isolate different perceptual properties. In our study, we used the same stimuli to investigate the learned properties of deep neural networks (DNNs), which are the leading computer vision algorithms to date (LeCun et al., 2015).

The DNNs we used were trained in a supervised fashion to assign labels to input images. To some degree, this task resembles the simple verbal explanations given to children by their parents. Since human perception is obviously much richer than the simple external supervision provided, we were not surprised to find that the best correlate for perceptual saliency of image changes is a part of the DNN computation that is only supervised indirectly (i.e. the mid-computation stage). This similarity is so strong that even with no fine-tuning to human perception, the DNN metric is competitively accurate, even compared with a direct model of perception.

This strong, quantifiable similarity to a gross aspect of perception may, however, reflect a mix of similarities and discrepancies in different perceptual properties. To address isolated perceptual effects, we considered experiments that manipulate a spatial interaction, where the difficulty of discriminating a foreground target is modulated by a background context. Results showed modulation of DNN target-diagnostic, isolated-unit information, consistent with the modulation found in perceptual discrimination. This was shown for contextual interactions reflecting grouping/segmentation (Harris et al., 2015), crowding/clutter (Livne & Sagi, 2007; Pelli et al., 2004), and shape superiority (Weisstein & Harris, 1974). DNN similarity to these grouping/gestalt phenomena appeared at the end-computation stages.

No less interesting are the cases in which there is no similarity. For example, perceptual effects related to 3D (Erdogan & Jacobs, 2016) and symmetry (Pramod & Arun, 2016) do not appear to have a strong correlate in the DNN computation. Indeed, it may be interesting to investigate the influence of visual experience in these cases. And, equally important, similarity should be considered in terms of specific perceptual properties rather than as a general statement.

In the human hierarchy of visual processing areas, information is believed to be processed in a feed-forward sweep, followed by recurrent processing loops (top-down and lateral) (Lamme & Roelfsema, 2000). Thus, for example, the early visual areas can perform deep computations.
Since mapping from visual areas to DNN computational layers is not simple, it will not be considered here. (Note that ResNet connectivity is perhaps reminiscent of unrolled recurrent processing.)

Interestingly, debate is ongoing about the degree to which visual perception is dependent on recurrent connectivity (Fabre-Thorpe et al., 1998; Hung et al., 2005): recurrent representations are obviously richer, but feedforward computations converge much faster. An implicit question here regarding the extent of feasible feed-forward representations is, perhaps: can contour segmentation, contextual influences, and complex shapes be learned? Based on the results reported here for feed-forward DNNs, a feedforward representation may seem sufficient. However, the extent to which this is true may be very limited. In this study we used small images with a small number of lines, while effects such as contour integration seem to take place even in very large configurations (Field et al., 1993). Such scaling seems more likely in a recurrent implementation. As such, a reasonable hypothesis may be that the full extent of contextual influence is only realizable with recurrence, while feedforward DNNs learn a limited version by converging towards a useful computation.

The use of DNNs in modeling of visual perception (or of biological visual systems in general) is subject to a tradeoff between accuracy and biological plausibility. In terms of architecture, other deep models better approximate our current understanding of the visual system (Riesenhuber & Poggio, 1999; Serre, 2014). However, the computation in trained DNN models is quite general-purpose (Huh et al., 2016; Yosinski et al., 2014) and offers unparalleled accuracy in recognition tasks (LeCun et al., 2015). Since visual computations are, to some degree, task- rather than architecture-dependent, an accurate and general-purpose DNN model may better resemble biological processing than less accurate biologically plausible ones (Kriegeskorte, 2015; Yamins & DiCarlo, 2016). We support this view by considering a controlled condition in which similarity is not confounded with task difficulty or categorization consistency."}, {"section_index": "2", "section_name": "6.3.2 USE IN PSYCHOPHYSICS", "section_text": "Our results imply that trained DNN models have good predictive value for outcomes of psychophysical experiments, permitting a zero-cost first-order approximation. Note, however, that the scope of such simulations may be limited, since learning (Sagi, 2011) and adaptation (Webster, 2011) were not considered here.

As proposed previously (Dosovitskiy & Brox, 2016; Johnson et al., 2016; Ledig et al., 2016), the saliency of small image changes can be estimated as the representational distance in trained DNNs. Here, we quantified this approach by relying on data from a controlled psychophysical experiment (Alam et al., 2014). We found the metric to be far superior to simple image statistical properties, and on par with a detailed perceptual model (Alam et al., 2014).
This metric can be useful in image compression, whereby optimizing degradation across image sub-patches by comparing perceptual loss may minimize visual artifacts and content loss.

Another fascinating option is the formation of hypotheses in terms of mathematically differentiable trained-DNN constraints, whereby it is possible to efficiently solve for the visual stimuli that optimally dissociate the hypotheses (see Gatys et al. 2015a;b; Mordvintsev et al. 2015, and note Goodfellow et al. 2014; Szegedy et al. 2013). The conclusions drawn from such stimuli can be independent of the theoretical assumptions about the generating process (for example, creating new visual illusions that can be seen regardless of how they were created)."}, {"section_index": "3", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Yoram Bonneh for his valuable questions which led to much of this work."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Md Mushfiqul Alam, Kedarnath P Vilankar, David J Field, and Damon M Chandler. Local masking in natural images: A database and analysis. Journal of Vision, 14(8):22, jan 2014.

Matteo Carandini, Jonathan B Demb, Valerio Mante, David J Tolhurst, Yang Dan, Bruno A Olshausen, Jack L Gallant, and Nicole C Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577-97, nov 2005.

Antoine Del Cul, Sylvain Baillet, and Stanislas Dehaene. Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biology, 5(10):e260, 2007.

Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.

Goker Erdogan and Robert A Jacobs. A 3D shape inference model matches human visual object similarity judgments better than deep convolutional neural networks. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. Cognitive Science Society, Austin, TX, 2016.

Michèle Fabre-Thorpe, Ghislaine Richard, and Simon J Thorpe. Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9(2):303-308, 1998.

David J Field, Anthony Hayes, and Robert F Hess. Contour integration by the human visual system: evidence for a local association field. Vision Research, 33(2):173-193, 1993.

Itzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological Cybernetics, 61(2):103-113, 1989.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A Neural Algorithm of Artistic Style. aug 2015a.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. may 2015b.

M A Georgeson and G D Sullivan. Contrast constancy: deblurring in human vision by spatial frequency channels. The Journal of Physiology, 252(3):627-656, 1975.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jon Gottesman, Gary S Rubin, and Gordon E Legge. A power law for perceived contrast in human vision. Vision Research, 21(6):791-799, 1981.

Hila Harris, Noga Pinchuk-Yacobi, and Dov Sagi. Target selective tilt-after effect during texture learning. Journal of Vision, 15(12):1134, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition.
M A Georgeson and G D Sullivan. Contrast constancy: deblurring in human vision by spatial frequency channels. The Journal of Physiology, 252(3):627-656, 1975. ISSN 1469-7793.

G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science (New York, N.Y.), 313(5786):504-7, jul 2006. ISSN 1095-9203. doi: 10.1126/science.1127647.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.

Avi Karni and Dov Sagi. Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences, 88(11):4966-4970, jun 1991. ISSN 0027-8424. doi: 10.1073/pnas.88.11.4966.

Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology, 10(11):e1003915, nov 2014. ISSN 1553-7358. doi: 10.1371/journal.pcbi.1003915.

Nikolaus Kriegeskorte. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1:417-446, 2015. ISSN 2374-4642.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Jonas Kubilius, Stefania Bracci, and Hans P Op de Beeck. Deep neural networks as a computational model for human shape sensitivity. PLoS Comput Biol, 12(4):e1004896, 2016. ISSN 1553-7358.

Victor A.F. Lamme and Pieter R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571-579, nov 2000. ISSN 0166-2236. doi: 10.1016/S0166-2236(00)01657-X.

Eric C Larson and Damon M Chandler. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):11006, 2010. ISSN 1017-9909.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. ISSN 0018-9219. doi: 10.1109/5.726791.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, may 2015. ISSN 0028-0836. doi: 10.1038/nature14539.

Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area V2. In Advances in Neural Information Processing Systems, pp. 873-880, 2008.

Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pp. 1-8, New York, New York, USA, jun 2009. ACM Press. ISBN 9781605585161. doi: 10.1145/1553374.1553453.

Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? arXiv preprint arXiv:1511.07543, 2015.

Tomer Livne and Dov Sagi. Configuration influence on crowding. Journal of Vision, 7(2):4, 2007. ISSN 1534-7362.

Peter Neri, Andrew J Parker, and Colin Blakemore. Probing the human stereoscopic system with reverse correlation. Nature, 401(6754):695-698, 1999. ISSN 0028-0836.

Bruno A Olshausen and David J Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996. ISSN 0028-0836.

Denis G Pelli, Melanie Palomares, and Najib J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4(12):12, 2004. ISSN 1534-7362.

Noga Pinchuk-Yacobi, Ron Dekel, and Dov Sagi. Expectation and the tilt aftereffect. Journal of Vision, 15(12):39, sep 2015. ISSN 1534-7362. doi: 10.1167/15.12.39.

U Polat and D Sagi. Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments. Vision Research, 33(7):993-9, may 1993. ISSN 0042-6989.

R T Pramod and S P Arun. Do computational models differ systematically from human object perception? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1601-1609, 2016.

Maximilian Riesenhuber and Tomaso Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019-1025, 1999.

Johannes D Seelig and Vivek Jayaraman. Feature detection and orientation tuning in the Drosophila central complex. Nature, 503(7475):262-266, 2013. ISSN 0028-0836.

Thomas Serre. Hierarchical models of the visual system. In Encyclopedia of Computational Neuroscience, pp. 1-12. Springer, 2014. ISBN 1461473209.

Eero P Simoncelli and William T Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. In ICIP (3), pp. 444-447, 1995.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. sep 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Andrew B Watson and Albert J Ahumada. Predicting visual acuity from wavefront aberrations. Journal of Vision, 8(4):17.1-19, jan 2008. ISSN 1534-7362. doi: 10.1167/8.4.17.

Michael A Webster. Adaptation and visual coding. Journal of Vision, 11(5), jan 2011. ISSN 1534-7362.

N. Weisstein and C. S. Harris. Visual detection of line segments: An object-superiority effect. Science, 186(4165):752-755, nov 1974. ISSN 0036-8075. doi: 10.1126/science.186.4165.752.

Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356-365, 2016. ISSN 1097-6256.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. nov 2013.

Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene CNNs. dec 2014.

Figure 4: Predicting perceptual sensitivity to image changes (following Figure 1). a-c, The L1 change in CaffeNet, GoogLeNet, and ResNet-152 DNN architectures as a function of perceptual threshold (dB).
d, The L1 change in GoogLeNet as a function of the L1 change in VGG-19.

Figure 5: Prediction accuracy as a function of computational stage. a, Predicting perceptual sensitivity for model VGG-19 using the best single kernel (i.e. using one fitting parameter, no cross validation), vs. the standard L1 metric (reproduced from Figure 1). b, For non-branch computational stages of model ResNet-152.

Model       R^2   SROCC   RMSE   Recognition accuracy
CaffeNet    .59   .78     5.44   56%
GoogLeNet   .59   .79     5.45   66%
VGG-19      .60   .79     5.40   70%
ResNet-152  .53   .74     5.82   75%

Table 2: Accuracy of perceptual sensitivity prediction and task-trained ImageNet center-crop top-1 validation accuracy for different DNN models (following Table 1, from which the third row is reproduced; used scale: 100%). The quality of prediction for ResNet-152 improves dramatically if only the first tens of layers are considered (see Figure 5b).

Table 3: Accuracy of perceptual sensitivity prediction for baseline models (see Section 8.2; used scale: 100%). (The numeric entries of this table are not recoverable from the source.)

Model               R^2   SROCC   RMSE   Recognition accuracy
CaffeNet iter 1     .46   .67     6.30   0%
CaffeNet iter 50K   .59   .79     5.43   37%
CaffeNet iter 100K  .60   .79     5.41   39%
CaffeNet iter 150K  -     .78     5.43   53%
CaffeNet iter 200K  .59   .78     5.45   54%
CaffeNet iter 250K  .59   .78     5.43   56%
CaffeNet iter 300K  .59   .78     5.44   56%
CaffeNet iter 310K  .59   .78     5.44   56%

Table 4: Accuracy of perceptual sensitivity prediction during CaffeNet model standard training (used scale: 100%). Last row reproduced from Table 2. (One R^2 entry is illegible in the source and is marked "-".)

Table 5: Robustness of perceptual sensitivity prediction for varying prediction parameters for model VGG-19. First three rows reproduced from Table 1. Measurements for the lower noise range of -60:-40 dB were omitted by mistake.

Scale  Metric  Augmentation  Noise range  R^2   SROCC   RMSE
100%   L1      noise phase   -40:25 dB    .60   .79     5.40
66%    L1      noise phase   -40:25 dB    .60   .79     5.42
50%    L1      noise phase   -40:25 dB    .57   .77     5.57
100%   L2      noise phase   -40:25 dB    .62   .80     5.29
100%   L1      None          -40:25 dB    .58   .77     5.55
100%   L1      noise phase   -20:25 dB    .59   .78     5.46
100%   L1      noise phase   -40:5 dB     .59   .79     5.43

Model                Day 1   Days 2-4   Masked
VGG-19               .36     .37        -
GoogLeNet            .31     .22        .16
MRSA-152             .26     .26        .11
CaffeNet iter 1      .32     .29        .39
CaffeNet iter 50K    .15     .19        .16
CaffeNet iter 310K   .16     .12        .18
Gabor Decomposition  .26     .27        .48
Steerable Pyramid    .24     .32        .25

Table 6: Background context for Shape. Shown is the Spearman correlation coefficient (SROCC) of perceptual data vs. model-based MI prediction across shapes (i.e. considering all shapes rather than only Easy vs.
Hard; note the original robust finding of the superiority of the Easy shape). Perceptual data from Weisstein & Harris (1974), where 'Day 1' and 'Days 2-4' (averaged) are for the reduced-masking condition depicted in their Figure 3.

Figure 6: Images where predicted threshold is too high ('Overshoot', where perturbation saliency is better than predicted) or too low ('Undershoot'), considered from several perceptual threshold ranges (±2 dB of shown number). Some images are reproduced from Figure 1. (Panels: VGG-19, scrambled VGG-19, and Gabor decomposition; threshold ranges span below -45 dB to above -15 dB.)

Figure 7: Background context for different DNN models (following Figure 2). (Panels: CaffeNet, GoogLeNet, and ResNet-152; bars compare consistent and inconsistent backgrounds.)

Figure 8: Background context for baseline DNN models (following Figure 2). 'CaffeNet iter 310K' is reproduced from Figure 7. (Panels: CaffeNet iter 1, CaffeNet iter 50K, CaffeNet iter 310K, Gabor Decomposition, and Steerable Pyramid; bars compare consistent and inconsistent backgrounds.)
Figure 9: Background context for Shape. Shown for each model is the measured MI for the six 'Hard' shapes as a function of the MI for the 'Easy' shape. The last panel shows an analogous comparison measured in human subjects by Weisstein & Harris (1974). A data point which lies below the dashed diagonal indicates a configuration for which discriminating line location is easier for the Easy shape compared with the relevant Hard shape.

Figure 10: Contrast sensitivity (following Figure 3) for DNN architectures CaffeNet, GoogLeNet, and ResNet-152. (Panels show responses from the data and conv1 stages to the final output stages (fc8, cls3_fc, fc1000, prob), across frequencies of 1 to 75 cycles/image and contrasts from 0.0078 to 1.)

Figure 11: Comparison of contrast sensitivity. Shown are iso-output curves, for which perceived contrast is the same (Human), or for which the L1 change relative to a gray image is the same (DNN model VGG-19). To obtain a correspondence between human frequency values (given in cycles per degree of visual field) and DNN frequency values (given in cycles per image), a scaling was chosen such that the minima of the blue curve is given at the same frequency value. Human data is for subject M.A.G. as measured by Georgeson & Sullivan (1975)."}, {"section_index": "5", "section_name": "8.1 DNN MODELS", "section_text": "To collect DNN computation snapshots, we used MATLAB with MatConvNet version 1.0-beta2 (Vedaldi & Lenc, 2015). All MATLAB code will be made available upon acceptance of this manuscript. The pre-trained DNN models we have used are: CaffeNet (which is a variant of AlexNet provided in Caffe; Jia et al., 2014), GoogLeNet (Szegedy et al., 2014), VGG-19 (Simonyan & Zisserman, 2014), and ResNet-152 (He et al., 2015). The models were trained on the same ImageNet LSVRC data. The CaffeNet model was trained using Caffe with the default ImageNet training parameters (stopping at iteration 310,000) and imported into MatConvNet. For the GoogLeNet model, we used the imported pre-trained reference-Caffe implementation. For VGG-19 and ResNet-152, we used the imported pre-trained original versions. In all experiments the input image size was 224 x 224 or 227 x 227.
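For readers who want to reproduce the snapshot collection in a modern framework, the following is a small sketch using forward hooks. The paper's pipeline used MATLAB with MatConvNet; the torchvision model and layer-naming here are stand-in assumptions, not the original code.

import torch
import torchvision.models as models

# Sketch: collect per-layer computation "snapshots" with forward hooks.
model = models.vgg19(pretrained=True).eval()
snapshots = {}

def make_hook(name):
    def hook(module, inputs, output):
        snapshots[name] = output.detach()
    return hook

for name, module in model.features.named_children():
    module.register_forward_hook(make_hook(name))

x = torch.rand(1, 3, 224, 224)  # input size used in the experiments
with torch.no_grad():
    model(x)
# `snapshots` now maps each feature-stage name to its activation tensor.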
"}, {"section_index": "6", "section_name": "8.2 BASELINE MODELS", "section_text": "As baselines to compare with pre-trained DNN models, we consider: (a) a multiscale linear filter bank of Gabor functions, (b) a steerable-pyramid linear filter bank (Simoncelli & Freeman, 1995), (c) the VGG-19 model for which the learned parameters (weights) were randomly scrambled within layer, and (d) the CaffeNet model at multiple time points during training. For the Gabor decomposition, the following Gabor filters were used: all combinations of σ = {1, 2, 4, 8, 16, 32, 64} px, λ = {1, 2} · σ, orientation = {0, π/3, 2π/3, π, 4π/3, 5π/3}, and phase = {0, π/2}.

The noiseless images were obtained from Alam et al. (2014). In the main text, 'image scale' refers to percent coverage of the DNN input. Since the size of the original images (149 x 149) is smaller than the DNN input of (224 x 224) or (227 x 227), the images were resized by a factor of 1.5 so that 100% image scale covers approximately the entire DNN input area.

Human psychophysics and DNN experiments were done for nearly identical images. A slight discrepancy relates to how the image is blended with the background in the special case where the region where noise is added has no image surround on one or two sides. On these sides (which depend on the technical procedure with which the images were obtained, see Alam et al., 2014), the surround blending here was hard, while the original was smooth."}, {"section_index": "7", "section_name": "8.4.1 SEGMENTATION", "section_text": "The images used are based on the Texture Discrimination Task (Karni & Sagi, 1991). In the variant considered here (Pinchuk-Yacobi et al., 2015), subjects were presented with a grid of lines, all of which were horizontal, except two or three that were diagonal. Subjects discriminated whether the arrangement of diagonal lines is horizontal or vertical, and this discrimination was found to be more difficult when the central line is horizontal rather than diagonal ('Hard' vs. 'Easy' in Figure 2a). To limit human performance in this task, two manipulations were applied: (a) the location of each line in the pattern was jittered, and (b) a noise mask was presented briefly after the pattern. Here we only retained (a).

A total of 90 configurations were tested, obtained by combinations of the following alternatives:

• Three scales: line length of 9, 12.3, or 19.4 px (the number of lines co-varied with line length, see Figure 12).

• Three levels of location jittering, defined as a multiple of line length: {1, 2, 3} · 0.0625 · l px, where l is the length of a line in the pattern. Jittering was applied separately to each line in the pattern.

• Ten locations of diagonal lines: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

For each configuration, the discriminated arrangement of diagonal lines was either horizontal or vertical, and the central line was either horizontal or diagonal (i.e. hard or easy).

Figure 12: Pattern scales used in the different configurations of the Segmentation condition. Actual images used were white-on-black rather than black-on-white."}, {"section_index": "8", "section_name": "8.4.2 CROWDING", "section_text": "The images used are motivated by the crowding effect (Livne & Sagi, 2007; Pelli et al., 2004). A total of 90 configurations were tested, obtained by combinations of the following alternatives:

• Three scales: font size of 15.1, 20.6, or 32.4 px (see Figure 13).

• Three levels of discriminated-letter location jittering, defined as a multiple of font size: {1, 2, 3} · 0.0625 · l px, where l is the font size. The jitter of the surround letters (M, N, S, and T) was fixed (i.e. the background was static).

• Ten locations: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

For each configuration, the discriminated letter was either A, B, C, D, E, or F, and the background was either blank (easy) or composed of the letters M, N, S, and T (hard).

Figure 13: Pattern scales used in the different configurations of the Crowding condition.
Actual images used were white-on-black rather than black-on-white.

The images used are based on the object superiority effect of Weisstein & Harris (1974), where discriminating a line location is easier when, combined with surrounding lines, a shape is formed. A total of 90 configurations were tested, obtained by combinations of the following alternatives:

• Three scales: discriminated-line length of 9, 15.1, or 22.7 px (see Figure 14).

• Five levels of whole-pattern location jittering, defined as a multiple of discriminated-line length: {1, 2, 5, 10, 15} · 0.0625 · l px, where l is the length of the discriminated line.

• Six 'hard' background line layouts (patterns b-f of their Figure 2 and the additional pattern f of their Figure 3 in Weisstein & Harris, 1974). The 'easy' layout was always the same (pattern a).

For each configuration, the line whose location is discriminated had four possible locations (two locations are shown in Figure 2c), and the surrounding background line layout could compose a shape (easy) or not (hard).

Figure 14: Pattern scales used in the different configurations of the Shape condition. Actual images used were white-on-black rather than black-on-white.

The images used depicted sine gratings at different combinations of contrast, spatial frequency, sine phase, and sine orientation.
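Since the contrast-sensitivity stimuli are fully specified by these four parameters, they are easy to regenerate; the following is a minimal sketch (the function name, parameter ranges, and gray-level convention are illustrative assumptions, not taken from the paper).

import numpy as np

def sine_grating(size=224, cycles_per_image=8.0, contrast=0.5,
                 orientation=0.0, phase=0.0):
    """Generate a sine grating in [0, 1] around mean gray (0.5)."""
    coords = np.arange(size) - size / 2.0
    x, y = np.meshgrid(coords, coords)
    # Project coordinates onto the grating orientation.
    u = x * np.cos(orientation) + y * np.sin(orientation)
    grating = np.sin(2.0 * np.pi * cycles_per_image * u / size + phase)
    return 0.5 + 0.5 * contrast * grating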
HJ0NvFzxl
[{"section_index": "0", "section_name": "LEARNING GRAPHICAL STATE TRANSITIONS", "section_text": "Daniel D. Johnson\nON BO INT II EINES\nDepartment of Computer Science\nHarvey Mudd College\n\n301 Platt Boulevard\nGraph-structured data is important in modeling relationships between multiple\nentities, and can be used to represent states of the world as well as many date\nstructures. describe a model known as a Gated Graph Sequence\nNeural Network (GGS-NN) that produces sequences from graph-structured input\nIn this work I introduce the Gated Graph Transformer Neural Network (GGT.\nNN), an extension of GGS-NNs that uses graph-structured data as an intermediate\nrepresentation. The model can learn to construct and modify graphs in sophisti-\ncated ways based on textual input, and also to use the graphs to produce a variety\nof outputs. For example, the model successfully learns to solve almost all of the\n\nbADI tasks 2016), and also discovers the rules governing graphical\n\nformulations of a simple cellular automaton and a family of Turing machines."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many different types of data can be formulated using a graph structure. One form of data that lends\nitself to a graphical representation is data involving relationships (edges) between entities (nodes).\nAbstract maps of places and paths between them also have a natural graph representation, where\nplaces are nodes and paths are edges. In addition, many data structures can be expressed in graphical\nform, including linked lists and binary trees.\nSubstantial research has been done on producing output when given graph-structured input (Kashim<\n(2015). Of particular relevance to this work are Graph Neural Networks (Gori et al.|/2005}/S\n(2009), or GNNs, which extend recursive neural networks by assigning states to each node ir\na graph based on the states of adjacent nodes. Recently |Li et al.|(2016) have modified GNNs to us\u00a2\ngated state updates and to produce output sequences. The resulting networks, called GG-NNs anc\nGGS-NNs. are successful at solving a varietv of tasks with sraph-structured input.\nThe current work further builds upon GG-NNs and GGS-NNs by allowing graph-structured inte1\nmediate representations, as well as graph-structured outputs. This is accomplished using a mor\nflexible graph definition, along with a set of graph transformations which take a graph and othe\ninformation as input and produce a modified version of the graph. This work also introduces th\nGated Graph Transformer Neural Network model (GGT-NN), which combines these transforma\ntions with a recurrent input model to incrementally construct a graph given natural language input\nand can either produce a final graph representing its current state, or use the graph to produce\nnatural language output.\nExtending GG-NNs in this way opens up a wide variety of applications. Since many types of dat\ncan be naturally expressed as a graph, it is possible to train a GGT-NN model to manipulate\nmeaningful graphical internal state. In this paper I demonstrate the GGT-NN model on the bAt\ntask dataset, which contains a set of stories about the state of the world. By encoding this state a\na graph and providing these graphs to the model at training time, a GGT-NN model can be traine\nto construct the correct graph from the input sentences and then answer questions based on thi\ninternal graph. 
I also demonstrate that this architecture can learn complex update rules by training it to model a simple 1D cellular automaton and arbitrary 4-state Turing machines. This requires the network to learn how to transform its internal state based on the rules of each task."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Diagram of the differentiable encoding of a graphical structure, as described in Section 3. On the left, the desired graph we wish to represent, in which there are 6 node types (shown as blue, purple, red, orange, green, and yellow) and two edge types (shown as blue/solid and red/dashed). Node 3 and the edge between nodes 6 and 7 have a low strength. On the right, depictions of the node and edge matrices: annotations, strengths, state, and connectivity correspond to x_v, s_v, h_v, and C, respectively. Saturation represents the value in each cell, where white represents 0, and fully saturated represents 1. Note that each node's annotation only has a single nonzero entry, corresponding to each node having a single well-defined type, with the exception of node 3, which has an annotation that does not correspond to a single type. State vectors are shaded arbitrarily to indicate that they can store network-determined data. The edge connectivity matrix C is three-dimensional, indicated by stacking the blue-edge cell on top of the red-edge cell for a given source-destination pair. Also notice the low strength for cell 3 in the strength vector and for the edge between node 6 and node 7 in the connectivity matrix."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Gated Recurrent Units (GRU) are a type of recurrent network cell introduced by Cho et al. (2014). Each unit uses a reset gate r and an update gate z, and updates according to

r^(t) = σ(W_r x^(t) + U_r h^(t-1) + b_r)        z^(t) = σ(W_z x^(t) + U_z h^(t-1) + b_z)
h̃^(t) = φ(W x^(t) + U(r^(t) ⊙ h^(t-1)) + b)     h^(t) = z^(t) ⊙ h^(t-1) + (1 - z^(t)) ⊙ h̃^(t)

where σ is the logistic sigmoid function, φ is an activation function (here tanh is used), x^(t) is the input vector at timestep t, h^(t) is the hidden output vector at timestep t, and W, U, W_r, U_r, W_z, U_z, b, b_r, and b_z are learned weights and biases. Note that ⊙ denotes elementwise multiplication.
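The update equations above translate directly into code; the following NumPy sketch is an illustration of the cell (the parameter container `params` is my own convention, not from the paper).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU update following the equations above (phi = tanh).

    `params` is a dict holding W_r, U_r, b_r, W_z, U_z, b_z, W, U, b.
    """
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h_prev + params["b_r"])
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h_prev + params["b_z"])
    h_tilde = np.tanh(params["W"] @ x + params["U"] @ (r * h_prev) + params["b"])
    # Note the gating convention used here: z keeps the previous state.
    return z * h_prev + (1.0 - z) * h_tilde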
"}, {"section_index": "4", "section_name": "2.2 GG-NN AND GGS-NN", "section_text": "The Gated Graph Neural Network (GG-NN) is a form of graphical neural network model described by Li et al. (2016). In a GG-NN, a graph G = (V, E) consists of a set V of nodes v with unique values and a set E of directed edges e = (v, v') ∈ V × V oriented from v to v'. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D, and each edge has a type y_e ∈ {1, ..., M}.

GG-NNs operate by first initializing the state h_v of each node to correspond to the annotation x_v. Then, a series of propagation steps occur. In each step, information is transferred between nodes across the edges, and the types of edge determine what information is sent. Each node sums the input it receives from all adjacent nodes, and uses that to update its own internal state, in the same manner as a GRU cell. Finally, the states of all nodes are used either to create a graph-level aggregate output, or to classify each individual node.

GGS-NNs extend GG-NNs by performing a large number of propagation-output cycles. At each stage, two versions of the GG-NN propagation process are run. The first is used to predict an output for that timestep, and the second is used to update the annotations of the nodes for the next timestep. This allows GGS-NNs to predict a sequence of outputs from a single graph.

Figure 2: Summary of the graph transformations. Input and output are represented as gray squares. a) Node addition (T_add), where the input is used by a recurrent network (white box) to produce new nodes, of varying annotations and strengths. b) Node state update (T_h), where each node receives input (dashed line) and updates its internal state. c) Edge update (T_C), where each existing edge (colored) and potential edge (dashed) is added or removed according to the input and states of the adjacent nodes (depicted as solid arrows meeting at circles on each edge). d) Propagation (T_prop), where nodes exchange information along the current edges, and update their states. e) Aggregation (T_repr), where a single representation is created using an attention mechanism, by summing information from all nodes weighted by relevance (with weights shown by saturation of arrows)."}, {"section_index": "5", "section_name": "3. DIFFERENTIABLE GRAPH TRANSFORMATIONS", "section_text": "In this section, I describe some modifications to the graph structure to make it fully differentiable, and then propose a set of transformations which can be applied to a graph structure in order to transform it. In particular, I redefine a graph G = (V, C) ∈ Γ as a set V of nodes v, and a connectivity matrix C ∈ R^(|V| × |V| × Y), where Y is the number of possible edge types. As before, each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. However, there is an additional constraint that Σ_j x_{v,j} = 1. One can then interpret x_{v,j} as the level of belief that v should have type j out of N possible node types. Each node also has a strength s_v ∈ [0, 1]. This represents the level of belief that node v should exist, where s_v = 1 means the node exists, and s_v = 0 indicates that the node should not exist and thus should be ignored.

Similarly, elements of C are constrained to the range [0, 1], and thus one can interpret C_{v,v',y} as the level of belief that there should be a directed edge of type y from v to v'. (Note that it is possible for there to be edges of multiple types between the same two nodes v and v', i.e. it is possible for C_{v,v',y} = C_{v,v',y'} = 1 where y ≠ y'.) Figure 1 shows the values of x_v, s_v, h_v, and C corresponding to a particular graphical structure.

There are five classes of graph transformation:

a) Node addition (T_add), which modifies a graph by adding new nodes and assigning them annotations x_v and strengths s_v based on an input vector.

b) Node state update (T_h), which modifies the internal state of each node using an input vector (similar to a GRU update step). Optionally, different input can be given to nodes of each type, based on direct textual references to specific node types. This version is called a direct reference update (T_h,direct).

c) Edge update (T_C), which modifies the edges between each pair of nodes based on the internal states of the two nodes and an external input vector.

d) Propagation (T_prop), which allows nodes to trade information across the existing edges and then update their internal states based on the information received.

e) Aggregation (T_repr), which uses an attention mechanism to select relevant nodes and then generates a graph-level output.

Each transformation has its own trainable parameters. Together, these transformations can be combined to process a graph in complex ways. An overview of these operations is shown in Figure 2. For details about the implementation of each of these transformations, see Appendix B.
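As a concrete picture of the state these transformations act on, the following is a minimal sketch of the differentiable graph encoding of this section. The class name and array layout are my own illustration; the paper specifies only the mathematical objects (x_v, s_v, h_v, and C), not a data structure.

import numpy as np

class DifferentiableGraph:
    """Soft graph state: annotations, strengths, hidden states, connectivity.

    Strengths and connectivity entries live in [0, 1], and each node's
    annotation is a distribution over the N node types.
    """
    def __init__(self, num_types, state_size, num_edge_types):
        self.x = np.zeros((0, num_types))          # annotations, rows sum to 1
        self.s = np.zeros((0,))                    # node strengths in [0, 1]
        self.h = np.zeros((0, state_size))         # hidden states
        self.C = np.zeros((0, 0, num_edge_types))  # connectivity in [0, 1]

    def add_node(self, annotation, strength):
        self.x = np.vstack([self.x, annotation])
        self.s = np.append(self.s, strength)
        self.h = np.vstack([self.h, np.zeros(self.h.shape[1])])
        n, y = self.C.shape[0], self.C.shape[2]
        grown = np.zeros((n + 1, n + 1, y))
        grown[:n, :n, :] = self.C   # the new node starts with no edges
        self.C = grown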
"}, {"section_index": "6", "section_name": "4 GATED GRAPH TRANSFORMER NEURAL NETWORK (GGT-NN)", "section_text": "In this section I introduce the Gated Graph Transformer Neural Network (GGT-NN), which is constructed by combining a series of these transformations. Depending on the configuration of the transformations, a GGT-NN can take textual or graph-structured input, and produce textual or graph-structured output. Here I describe one particular GGT-NN configuration, designed to build and modify a graph based on a sequence of input sentences, and then produce an answer to a query.

When run, the model performs the following: For each sentence k, each word is converted to a one-hot vector w_i^(k), and the sequence of words (of length L) is passed through a GRU layer to produce a sequence of partial-sentence representation vectors p_i^(k). The full sentence representation vector i^(k) is initialized to the last partial representation vector p_L^(k). Furthermore, a direct-reference input matrix D^(k) is set to the sum of partial representation vectors corresponding to the words that directly reference a node type, i.e. D_n^(k) = Σ_{i∈R_n} p_i^(k), where R_n is the set of words in sentence k that directly refer to node type n. This acts like an attention mechanism, by accumulating the partial representation vectors for the words that directly reference each type, and masking out the vectors corresponding to other words.

Next, a series of graph transformations are applied, as depicted in Algorithm 1. Depending on the task, direct reference updates and per-sentence propagation can be enabled or disabled. The output function f_output will depend on the specific type of answer desired. If the answer is a single word, f_output can be a multilayer perceptron followed by a softmax operation. If the answer is a sequence of words, f_output can use a recurrent network (such as a GRU) to produce a sequence of outputs. Note that transformations with different superscripts (T_h and T_h^query, for instance) refer to similar transformations with different learned weights.

Algorithm 1 Graph Transformation Pseudocode
 1: G ← empty graph
 2: for k from 1 to K do
 3:   G ← T_h(G, i^(k))
 4:   if direct reference enabled then
 5:     G ← T_h,direct(G, D^(k))
 6:   end if
 7:   if intermediate propagation enabled then
 8:     G ← T_prop(G)
 9:   end if
10:   h_G^(k) ← T_repr(G)
11:   G ← T_add(G, [i^(k), h_G^(k)])
12:   G ← T_C(G, i^(k))
13: end for
14: G ← T_h^query(G, i^query)
15: if direct reference enabled then
16:   G ← T_h,direct^query(G, D^query)
17: end if
18: G ← T_prop^query(G)
19: h_G^query ← T_repr^query(G)
20: return f_output(h_G^query)
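The per-sentence encoding that feeds this loop is simple to sketch. The following toy implementation follows the description above; all names are mine, and `gru_step` refers to the earlier NumPy sketch rather than the paper's implementation.

import numpy as np

def encode_sentence(one_hot_words, gru_params, type_refs, num_types, gru_step):
    """Compute the sentence vector i_k and direct-reference matrix D.

    one_hot_words: list of one-hot word vectors w_i for sentence k.
    type_refs: for each word index i, the node type it directly
               references, or None if it references no type.
    """
    d = gru_params["b"].shape[0]
    h = np.zeros(d)
    partials = []
    for w in one_hot_words:
        h = gru_step(w, h, gru_params)
        partials.append(h)
    i_k = partials[-1]              # full-sentence representation p_L
    D = np.zeros((num_types, d))
    for i, n in enumerate(type_refs):
        if n is not None:
            D[n] += partials[i]     # accumulate direct references per type
    return i_k, D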
Since the processing of the input and all of the graph transformations are differentiable, at this point the network output can be compared with the correct output for that query and used to update the network parameters, including both the GRU parameters used when processing the input and the internal weights associated with each transformation."}, {"section_index": "7", "section_name": "4.1 SUPERVISION", "section_text": "As with many supervised models, one can evaluate the loss based on the likelihood of producing an incorrect answer, and then minimize the loss by backpropagation. However, based on initial experiments, the model appeared to require additional supervision to extract meaningful graph-structured data. To provide this additional supervision, I found it beneficial to provide the correct graph at each timestep and train the network to produce that graph. This occurs in two stages, first when new nodes are proposed, and then when edges are adjusted. For the edge adjustment, the edge loss between a correct edge matrix C* and the computed edge matrix C is given by

L_edge = -C* · ln(C) - (1 - C*) · ln(1 - C).

The node adjustment is slightly more complex. Multiple nodes are added in each timestep, but the order of those nodes is arbitrary, and only their existence is important. Thus it should be possible for the network to determine the optimal ordering of the nodes. In fact, this is important because there is no guarantee that the nodes will be ordered consistently in the training data.

Vinyals et al. (2016) demonstrate a simple method for training a network to output unordered sets: the network produces a sequence of outputs, and these outputs are compared with the closest ordering of the training data, i.e., the ordering of the training data which would produce the smallest loss when compared with the network output. Vinyals et al. show that when using this method, the network arbitrarily chooses an ordering which may not be the optimal ordering for the task. However, in this case any ordering should be sufficient, and I found the arbitrary orderings selected in this way to work well in practice. In particular, letting s*_{π(v)} and x*_{π(v)} denote the correct strength and annotations of node v under ordering π, the loss becomes

L_node = -max_π Σ_{v=|V_old|+1}^{|V_new|} [ s*_{π(v)} ln(s_v) + (1 - s*_{π(v)}) ln(1 - s_v) + x*_{π(v)} · ln(x_v) ].
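These two losses admit a direct, brute-force rendering. The sketch below is my own illustration (with an eps term for numerical stability): it enumerates orderings of the newly added nodes, which is only practical for the small per-timestep node counts used here.

import numpy as np
from itertools import permutations

def edge_loss(C_true, C, eps=1e-9):
    # Binary cross-entropy between target and predicted connectivity.
    return np.sum(-C_true * np.log(C + eps)
                  - (1.0 - C_true) * np.log(1.0 - C + eps))

def node_loss(s_true, x_true, s, x, num_old, eps=1e-9):
    """Maximize log-likelihood over orderings of the newly added nodes."""
    new = range(num_old, len(s))
    best = -np.inf
    for perm in permutations(new):
        ll = 0.0
        for v, pv in zip(new, perm):
            ll += (s_true[pv] * np.log(s[v] + eps)
                   + (1.0 - s_true[pv]) * np.log(1.0 - s[v] + eps)
                   + np.dot(x_true[pv], np.log(x[v] + eps)))
        best = max(best, ll)
    return -best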
"}, {"section_index": "8", "section_name": "4.2 OTHER TRANSFORMATION CONFIGURATIONS", "section_text": "The structure described in Algorithm 1 is designed for question-answering tasks. However, due to the composability of the individual graph transformations, other configurations could be used to solve other tasks that operate on structured data.

For instance, if a task consists of tracking relationships between a fixed set of objects, one could construct a version of the model that does not use the new-nodes transformation (T_add), but instead only modifies edges. If the task was to extract information from an existing graph, a structure similar to the GGS-NNs could be built by using only the propagation and aggregation transformations. If the task was to construct a graph based on textual input, the query processing steps could be omitted, and instead the final graph could be returned for processing. And if information should be gathered from a sequence of graphs instead of from a single graph, the query processing steps could be modified to run in parallel on the full sequence of graphs and extract information from each graph. This last modification is demonstrated in Appendix D."}, {"section_index": "9", "section_name": "5.1 BABI TASKS", "section_text": "I evaluated the GGT-NN model on the bAbI tasks, a set of simple natural-language tasks, where each task is structured as a sequence of sentences followed by a query (Weston et al., 2016). The generation procedure for the bAbI tasks includes a 'Knowledge' object for each sentence, representing the current state of knowledge after that sentence. I exposed this knowledge object in graph format, and used this to train a GGT-NN in supervised mode. The knowledge object provides names for each node type, and direct reference was performed based on these names: if a word in the sentence matched a node type name, it was parsed as a direct reference to all nodes of that type. For details on this graphical format, see Appendix C.

I trained two versions of the GGT-NN model for each task: one with and one without direct reference. Tasks 3 and 5, which involve a complex temporal component, were trained with intermediate propagation, whereas all of the other tasks were not, because the structure of the tasks made such complexity unnecessary. Most task models were configured to output a single word, but task 19 (pathfinding) used a GRU to output multiple words, and task 8 (listing) was configured to output a strength for each possible word, to allow multiple words to be selected without having to consider ordering."}, {"section_index": "10", "section_name": "5.1.1 ANALYSIS AND RESULTS", "section_text": "Results are shown in Tables 1 and 2. The GGT-NN model was able to reach 95% accuracy in all but one of the tasks, and reached 100% accuracy in eleven of them (see Table 2). Additionally, for fourteen of the tasks, the model was able to reach 95% accuracy using 500 or fewer of the 1000 training examples (see Table 1).

The only task that the GGT-NN was unable to solve with 95% accuracy was task 17 (Positional Reasoning), for which the model was not able to attain a high accuracy.
Task 17 has a larger number of possible entities than the other tasks: each entity consists of a color (chosen from five options) and a shape (chosen from four shapes), for a total of 20 unique entities that must be represented separately. Additionally, the stories are much shorter than those in other tasks (2 facts for each set of 8 questions). It is likely that these additional complexities caused the network performance to suffer.

Table 1: Number of training examples needed before the GGT-NN model could attain < 5% error on each of the bAbI tasks. Experiments were run with 50, 100, 250, 500, and 1000 examples. 'GGT-NN + direct ref.' denotes the performance of the model with direct reference, and 'GGT-NN' denotes the performance of the model without direct reference. Dashes indicate that the model was unable to reach the desired accuracy with 1000 examples.

Task                          GGT-NN + direct ref.   GGT-NN
1 - Single Supporting Fact    100                    1000
2 - Two Supporting Facts      250                    -
3 - Three Supporting Facts    1000                   -
4 - Two Arg. Relations        1000                   1000
5 - Three Arg. Relations      500                    -
6 - Yes/No Questions          100                    -
7 - Counting                  250                    -
8 - Lists/Sets                250                    1000
9 - Simple Negation           250                    -
10 - Indefinite Knowledge     1000                   -
11 - Basic Coreference        100                    1000
12 - Conjunction              500                    1000
13 - Compound Coref.          100                    1000
14 - Time Reasoning           1000                   -
15 - Basic Deduction          500                    500
16 - Basic Induction          100                    500
17 - Positional Reasoning     -                      -
18 - Size Reasoning           1000                   -
19 - Path Finding             500                    -
20 - Agent's Motivations      250                    250

Table 2: Error rates of various models on the bAbI tasks. Bold indicates < 5% error. For descriptions of each of the tasks, see Table 1. 'GGT-NN + direct ref.' denotes the GGT-NN model with direct reference, and 'GGT-NN' denotes the version without direct reference. See text for details regarding the models used for comparison. Results from LSTM and MemNN reproduced from Weston et al. (2016). Results from other existing models reproduced from Henaff et al. (2016).

       --------------- 1,000 examples ---------------   --------------- 10,000 examples ---------------
Task   + dir. ref.  GGT-NN  LSTM  MemNN  MemN2N  EntNet   NTM   D-NTM  MemN2N*  DNC   DMN+  EntNet
1      0            0.7     50.0  0      0       0.7      31.5  4.4    0        0     0     0
2      0            5.7     80.0  0      8.3     56.4     54.5  27.5   0.3      0.4   0.3   0.1
3      1.3          12.0    80.0  0      40.3    69.7     43.9  71.3   2.1      1.8   1.1   4.1
4      1.2          2.2     39.0  0      2.8     1.4      0     0      0        0     0     0
5      1.6          10.9    30.0  2.0    13.1    4.6      0.8   1.7    0.8      0.8   0.5   0.3
6      0            7.7     52.0  0      7.6     30.0     17.1  1.5    0.1      0     0     0.2
7      0            5.6     51.0  15.0   17.3    22.3     17.8  6.0    2.0      0.6   2.4   0
8      0            3.3     55.0  9.0    10.0    19.2     13.8  1.7    0.9      0.3   0     0.5
9      0            11.6    36.0  0      13.2    31.5     16.4  0.6    0.3      0.2   0     0.1
10     3.4          28.6    56.0  2.0    15.1    15.6     16.6  19.8   0        0.2   0     0.6
11     0            0.2     28.0  0      0.9     8.0      15.2  0      0        0     0     0.3
12     0.1          0.7     26.0  0      0.2     0.8      8.9   6.2    0        0     0.2   0
13     0            0.8     6.0   0      0.4     9.0      7.4   7.5    0        0     0     1.3
14     2.2          55.1    73.0  1.0    1.7     62.9     24.2  17.5   0.2      0.4   0.2   0
15     0.9          0       79.0  0      0       57.8     47.0  0      0        0     0     0
16     0            0       77.0  0      1.3     53.2     53.6  49.6   51.8     55.1  45.3  0.2
17     34.5         48.0    49.0  35.0   51.0    46.4     25.5  1.2    18.6     12.0  4.2   0.5
18     2.1          10.6    48.0  5.0    11.1    8.8      2.2   0.2    5.3      0.8   2.1   0.3
19     0            70.6    92.0  64.0   82.8    90.4     4.3   39.5   2.3      3.9   0     2.3
20     0            1.0     9.0   0      0       2.6      1.5   0      0        0     0     0
It is likely that these additional complexities caused the network performance tc\nsuffer.\nFor comparison, accuracy on the bAbI tasks is also included for a simple sequence-to-sequence\nLSTM model and for a variety of existing state-of-the-art approaches (see Table 2): a simple\nsequence-to-sequence LSTM model, as implemented in (2016), a modified Mem-\nory Network model (MemNN, |Weston et al} 2016), End-To-End Memory Network (MemN2N,\n\nMachine (NTM, 2014), Dynamic NTM (D-NTM, , a larger\nversion of the MemN2N model with weight tying and nonlinearity (MemN2N*,|Sukhbaatar et al.\n\n(2015), Differentiable Neural Computer (DNC, [Graves et al.|/2016), and Dynamic Memory Networl\n(DMN+. [Xiong et al] (2016). Although the GGT-NN model was trained using only 1,000 training\nexamples, results using 10,000 examples have also been reproduced here for comparison. Also, it\nis important to note that the GGT-NN and MemNN models were trained with strong supervision:\nthe GGT-NN model was trained with full graph information, and the MemNN model was trained\nwith information on which sentences were relevant to the query. All other models were trained\nend-to-end without additional supervision.\nSince the GGT-NN and MemNN models are both strongly supervised, it is interesting to note tha\neach approach outperforms the other on a subset of the tasks. In particular, the GGT-NN model wit\ndirect reference attains a higher level of accuracy on the following tasks, with an improvement o:\n0.4-64% depending on the task: task 5 (0.4%), task 7 (15%), task 8 (9%), task 17 (0.5%), task 1\u00e9\n(2.9%), and task 19 (64%). This may indicate that a graphical representation is superior to a list o:\nsentence memories for solving these tasks. On the other hand, the MemNN model outperforms the\nGGT-NN model (0.1-2.9% greater accuracy) on tasks 3, 4, 10, 12, 14, and 15.\nOf particular interest is the performance on task 19, the pathfinding task, for which the GGT-ND\nmodel with direct reference performs better than all but one of the other models (DMN+), anc\nshows a large improvement over the performance of the MemNN model. This is reasonable, sinc:\npathfinding is a task that is naturally suited to a graphical representation. The shortest path betwee!\ntwo nodes can be easily found by sending information across all paths away from one of the nodes i1\na distributed fashion, which the GGT-NN model allows. Note that the preexisting GGS-NN mode\n(discussed in Section [2.2) was also able to successfully learn the pathfinding task, but required the\ninput to be preprocessed into graphical form even when evaluating the model, and thus could no\nbe directly evaluated on the textual form of any of the bAbI tasks 2016). The curren\nresults demonstrate that the proposed GGT-NN model is able to solve the pathfinding task whe\ngiven textual input.\nSimilarly, both variants of the GGT-NN model show improvement over many other models on task\n16, the induction task. Solving the induction task requires being able to infer relationships based on\nsimilarities between entities. (One example from this task: Lily is a swan. Lily is white. Bernhard\nis green. Greg is a swan. What color is Greg? A:white.) In a graphical setting, this can be done\nby following a sequence of edges (Greg + swan \u2014 Lily \u2014 white), and the performance of the\nGGT-NN model indicates that this task is particularly suited to such a representation.\nIn general, the GGT-NN model with direct reference performs better than the model without it. 
The model with direct reference reaches 95% accuracy on 19/20 of the bAbI tasks, while the model without direct reference reaches that level of accuracy on 9/20 of the tasks (see Table 2). Additionally, when compared to the direct-reference model, the model without direct reference requires more training examples in order to reach the accuracy threshold (see Table 1). This indicates that, although the model can be used without direct reference, adding direct reference greatly improves the training of the model."}, {"section_index": "11", "section_name": "5.2 RULE DISCOVERY TASKS", "section_text": "To demonstrate the power of GGT-NN to model a wide variety of graph-based problems, I applied the GGT-NN to two additional tasks. In each task, a sequence of data structures were transformed into a graphical format, and the GGT-NN was tasked with predicting the data for the next timestep based on the current timestep. No additional information was provided as textual input; instead, the network was tasked with learning the rules governing the evolution of the graph structure over time.

Table 3: Accuracy of GGT-NN on the Rule 30 Automaton and Turing Machine tasks. (Columns: Original Task, Generalization: 20 timesteps, Generalization: 30 timesteps; the numeric entries are not recoverable from the source.)

Figure 3: Visualization of network performance on the Rule 30 Automaton task. Top node (purple) represents zero, bottom node (blue) represents 1, and middle nodes (green, orange, and red) represent individual cells. Blue edges indicate adjacent cells, and gold edges indicate the value of each cell. Three timesteps occur between each row. (Rows show the model after 1000, 2000, 3000, and 7000 training iterations, followed by the ground truth.)"}, {"section_index": "12", "section_name": "5.2.1 CELLULAR AUTOMATON TASK", "section_text": "The first task used was a 1-dimensional cellular automaton, specifically the binary cellular automaton known as Rule 30 (Wolfram, 2002). Rule 30 acts on an infinite set of cells, each with a binary state (either 0 or 1). At each timestep, each cell deterministically changes state based on its previous state and the states of its neighbors. In particular, the update rules are

Current neighborhood   111   110   101   100   011   010   001   000
Next value              0     0     0     1     1     1     1     0

Cell states can be converted into graphical format by treating the cells as a linked list. Each of the cells is represented by a node with edges connecting it to the cell's neighbors, and a value edge is used to indicate whether the cell is 0 or 1. This format is described in more detail in Appendix C."}, {"section_index": "13", "section_name": "5.2.2. TURING MACHINES", "section_text": "The second task was simulating an arbitrary 2-symbol 4-state Turing machine. A Turing machine operates on an infinite tape of cells, each containing a symbol from a finite set of possible symbols. It has a head, which points at a particular cell and can read and write the symbol at that cell. It also has an internal state, from a finite set of states. At each timestep, based on the current state and the contents of the cell at the head, the machine writes a new symbol, changes the internal state, and can move the head left or right or leave it in place. The action of the machine depends on a finite set of rules, which specify the actions to take for each state-symbol combination. Note that the version of Turing machine used here has only 2 symbols, and requires that the initial contents of the tape be all 0 (the first symbol) except for finitely many 1s (the second symbol).

When converting a Turing machine to graphical format, the tape of the machine is modeled as a linked list of cells. Additionally, each state of the machine is denoted by a state node, and edges between these nodes encode the transition rules. There is also a head node, which connects both to the current cell and to the current state of the machine. See Appendix C for more details.
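The Rule 30 update in the table above has a compact closed form, which makes it easy to generate training data. The sketch below is my own illustration (the XOR/OR formulation and zero-padded boundaries are standard conventions for Rule 30, not taken from the paper):

def rule30_step(cells):
    """One update of the Rule 30 automaton (cells: list of 0/1 values).

    Matches the table above: a cell becomes left XOR (center OR right).
    Boundary cells are treated as having 0-valued neighbors.
    """
    padded = [0] + list(cells) + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Example: a single 1 cell grows into the familiar Rule 30 triangle.
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print(row)
    row = rule30_step(row)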
"}, {"section_index": "14", "section_name": "5.2.3. ANALYSIS AND RESULTS", "section_text": "The GGT-NN model was trained on 1000 examples of the Rule 30 automaton with different initial states, each of which simulated 7 timesteps of the automaton, and 20,000 examples of Turing machines with different rules and initial tape contents, each of which simulated 6 timesteps of the Turing machine. Performance was then evaluated on 1000 new examples generated with the same format. The models were evaluated by picking the most likely graph generated by the model, and comparing it with the correct graph. The percent accuracy denotes the fraction of the examples for which these two graphs were identical at all timesteps. In addition to evaluating the performance on identical tasks, the generalization ability of the models was also assessed. The same trained models were evaluated on versions of the task with 20 and 30 timesteps of simulation.

Results are shown in Table 3. The models successfully learned the assigned tasks, reaching high levels of accuracy for both tasks. Additionally, the models show the ability to generalize to large inputs, giving a perfect output in the majority of extended tasks. For visualization purposes, Figure 3 shows the model at various stages of training when evaluated starting with a single 1 cell.

Giles et al. (1992) describe a method for extracting a finite state machine from a trained recurrent neural network by quantizing the hidden states of the network, recording all possible state transitions, and using them to construct a minimal directed graph representing the state machine. This method, however, requires postprocessing of the network to extract the graph, and is limited to extracting graphs that represent state machines. Additionally, although the FSM extraction method described by Giles et al. (1992) and the GGT-NN model both produce graphs using neural networks, the goals are different: the FSM extraction method aims to learn a single graph that can classify sequences, whereas the GGT-NN model aims to learn a neural network that can manipulate graphs.

The lifted relational neural network (LRNN) is another approach to working with structured data (Sourek et al., 2015). LRNNs require the input to be formatted as a combination of weighted predicate logic statements, encompassing both general rules and specific known facts. For each training example, the statements are used to construct a 'ground neural network', with a connection pattern determined by the dependencies between the statements. LRNNs can learn to extract information by adjusting the weights of each statement, but require the rules to be composed by hand based on the task structure. Furthermore, unlike in GGT-NNs, a LRNN has no internal state associated with the objects it describes (which are instead represented by single neurons), and the relationships between objects cannot be constructed or modified by the network.

Many methods have been proposed for combining neural networks with graphs. These methods generally require the input to the network to be in graphical format. For instance, GNNs and GGS-NNs take a graph as input, and propagate information between nodes according to the graph structure (Gori et al., 2005; Scarselli et al., 2009; Li et al., 2016). Similarly, graph convolutional networks extract information from an existing graph structure by using approximations to spectral graph convolutions (Kipf & Welling, 2016). These methods are similar to GGT-NNs in that they all store information in the nodes of a graph and use edges to determine how information flows. However, they all use a graph with fixed structure, and can only accept graphical data. The GGT-NN model, on the other hand, allows the graph structure to be built and modified based on unstructured input.

Multiple recent architectures have included differentiable internal states.
Memory Networks, as described in Weston et al. (2016), and the fully differentiable end-to-end memory networks, described in Sukhbaatar et al. (2015), utilize a differentiable long-term memory component, consisting of a set of memories that are produced by encoding the input sentences. To answer a query, an attention mechanism is used to select a subset of these memories, and the resulting memories are processed to produce the desired output. Differentiable Neural Computers (DNCs), described in Graves et al. (2016), interact with a fixed-size memory using a set of read and write 'heads', which can be moved in the memory either by searching for particular content or by following temporal 'links of association' that track the order in which data was written.

Memory networks and DNCs share with the GGT-NN model the ability to iteratively construct an internal state based on textual input, and use that internal state to answer questions about the underlying structured data. However, in these models, the structure of the internal state is implicit: although the network can store and work with structured data, the actual memory consists of a set of vectors that cannot be easily interpreted, except by monitoring the network access patterns. The GGT-NN model, on the other hand, explicitly models the internal state as a graph with labeled nodes and edges. This allows the produced graph to be extracted, visualized, and potentially used in downstream applications that require graph-structured input.

Hierarchical Attentive Memory (HAM) is a memory-based architecture that consists of a binary tree built on top of an input sequence (Andrychowicz & Kurach, 2016). A recurrent controller accesses the HAM module by performing a top-down search through the tree, at each stage choosing to attend to either the left or right subtrees. Once this process reaches a leaf, the value of the leaf is provided to the controller to use in predicting the next output, and this leaf's value can be updated with a new value. This architecture is especially suited toward sequence-based tasks, and has been shown to generalize to longer sequences very efficiently due to the tree structure. However, it is unclear whether a HAM module would work well with non-sequential structured data, since the tree structure is fixed by the network.

One advantage of the GGT-NN model over existing works is that it can process data in a distributed fashion. Each node independently processes its surroundings, which can be beneficial for complex tasks such as pathfinding on a graph.
This is in contrast to memory networks, DNCs, and HAM modules, which are restricted to processing only a fixed number of locations in a given timestep. On the other hand, the distributed nature of the GGT-NN model means that it is less time and space efficient than these other networks. Since every node can communicate with every other node, the time and space required to run a GGT-NN step scales quadratically with the size of the input. A DNC or memory network, on the other hand, either scales linearly (since it attends to all stored data or memories) or is constant (if restricted to a fixed-size memory), and a HAM module scales logarithmically (due to the tree structure)."}, {"section_index": "15", "section_name": "7 CONCLUSION", "section_text": "The results presented here show that GGT-NNs are able to successfully model a wide variety of tasks using graph-structured states and potentially could be useful in solving many other types of problems. The specific GGT-NN model described here can be used as-is for tasks consisting of a sequence of input sentences and graphs, optionally followed by a query. In addition, due to the modular nature of GGT-NNs, it is possible to reconfigure the order of the transformations to produce a model suitable for a different task.

The GGT-NN architecture has a few advantages over the architectures described in existing works. In contrast to other approaches to working with structured data, GGT-NNs are designed to work with unstructured input, and are able to modify a graphical structure based on the input. And in contrast to memory networks or DNCs, the internal state of the network is explicitly graph structured, and complex computations can be distributed across the nodes of the graph.

One downside of the current model is that the time and space required to train the model increase very quickly as the complexity of the task increases, which limits the model's applicability. It would be very advantageous to develop optimizations that would allow the model to train faster and with smaller space requirements, such as using sparse edge connections, or only processing some subset of the nodes at each timestep. Another promising direction of future work is in reducing the level of supervision needed to obtain meaningful graphs, for example by combining a few examples that have full graph-level supervision with a larger set of examples that do not have graph-level information, or using additional regularization to enable the GGT-NN model to be trained without any graph information.

There are exciting potential uses for the GGT-NN model. One particularly interesting application would be using GGT-NNs to extract graph-structured information from unstructured textual descriptions. More generally, the graph transformations provided here may allow machine learning to interoperate more flexibly with other data sources and processes with structured inputs and outputs."}, {"section_index": "16", "section_name": "ACKNOWLEDGMENTS", "section_text": "I would like to thank Harvey Mudd College for computing resources. I would also like to thank the developers of the Theano library, which I used to run my experiments. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575."}, {"section_index": "17", "section_name": "REFERENCES", "section_text": "Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun.
Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969, 2016.

Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In ICML, volume 3, pp. 321-328, 2003.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. ICLR, 2016.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. ICLR, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. ICLR, 2016.

Stephen Wolfram. A new kind of science, volume 5. Wolfram Media, Champaign, 2002.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2397-2406, 2016."}, {"section_index": "18", "section_name": "APPENDIX A BACKGROUND ON GG-NNS AND GGS-NNS", "section_text": "Recall from section 2.2 that GG-NNs represent a graph G = (V, E) as a set V of nodes v with unique values 1, ..., |V| and a set E of directed edges e = (v, v') ∈ V × V oriented from v to v'. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. Additionally, each edge has a type y_e ∈ {1, ..., M}.

Initially, h_v^(1) is set to the annotation x_v, padded with zeros. Then nodes exchange information for some fixed number of timesteps T according to the propagation model

    h_v^(1) = [x_v^T, 0]^T,        a_v^(t) = A_v^T [h_1^(t-1)T, ..., h_|V|^(t-1)T]^T,
    z_v^(t) = σ(W_z a_v^(t) + U_z h_v^(t-1)),        r_v^(t) = σ(W_r a_v^(t) + U_r h_v^(t-1)),
    h̃_v^(t) = tanh(W a_v^(t) + U(r_v^(t) ⊙ h_v^(t-1))),        h_v^(t) = (1 - z_v^(t)) ⊙ h_v^(t-1) + z_v^(t) ⊙ h̃_v^(t).

Here a_v^(t) represents the information received by each node from its neighbors in the graph, and the matrix A ∈ R^{D|V| × 2D|V|} has a specific structure that determines how nodes communicate. The first half of A, denoted A^(out) ∈ R^{D|V| × D|V|}, corresponds to outgoing edges, whereas the second half, A^(in) ∈ R^{D|V| × D|V|}, corresponds to incoming edges. Equivalently,

    a_v^(t) = Σ_{v' ∈ V} ( Σ_{y=1}^{M} s_edge(v, v', y) P_y + s_edge(v', v, y) P'_y ) h_{v'}^(t-1),

where s_edge(v, v', y) is 1 if e = (v, v') ∈ E and y_e = y, and 0 otherwise.

The output from a GG-NN is flexible depending on the task. For node selection tasks, a node score o_v = g(h_v^(T), x_v) is given for each node, and then a softmax operation is applied. Graph-level outputs are obtained by combining an attention mechanism i and a node representation function j, both implemented as neural networks, to produce the output representation

    h_G = tanh( Σ_{v ∈ V} σ(i(h_v^(T), x_v)) ⊙ tanh(j(h_v^(T), x_v)) ).
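To make the propagation model concrete, the following minimal Python/numpy sketch implements one GG-NN propagation step in the notation above. It is an illustrative reading of the equations, not code from the paper: the binary tensor E stands in for s_edge, and all weight shapes are assumptions for the example.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(H, E, P_fwd, P_bwd, Wz, Uz, Wr, Ur, W, U):
    # H: (V, D) hidden states h_v^(t-1)
    # E: (M, V, V) with E[y, v, w] = s_edge(v, w, y)
    # P_fwd[y], P_bwd[y]: (D, D) propagation matrices P_y and P'_y
    A = np.zeros_like(H)
    for y in range(E.shape[0]):
        # messages along outgoing edges use P_y; incoming edges use P'_y
        A += E[y] @ (H @ P_fwd[y].T) + E[y].T @ (H @ P_bwd[y].T)
    z = sigmoid(A @ Wz.T + H @ Uz.T)            # update gate z_v^(t)
    r = sigmoid(A @ Wr.T + H @ Ur.T)            # reset gate r_v^(t)
    H_tilde = np.tanh(A @ W.T + (r * H) @ U.T)  # candidate state
    return (1.0 - z) * H + z * H_tilde          # new hidden states h_v^(t)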
Gated Graph Sequence Neural Networks (GGS-NNs) are an extension of GG-NNs to sequential output o^(1), ..., o^(K). At each output step k, the annotation matrix X^(k) = [x_1^(k); ...; x_|V|^(k)]^T ∈ R^{|V| × N} is used. A GG-NN F_o is trained to predict an output o^(k) from X^(k), and another GG-NN F_X is trained to predict X^(k+1) from X^(k). Prediction of the output at each step is performed as in a normal GG-NN, and prediction of X^(k+1) from the set of all final hidden states H^(k,T) (after T propagation steps of F_X) occurs according to the equation

    x_v^(k+1) = σ(g(h_v^(k,T), x_v^(k))).

[Figure 4 panels: node addition, node state update, edge update, propagation, aggregation.]

Figure 4: Diagram of the operations performed for each class of transformation. Graph state is shown in the format given by Figure 1. Input and output are shown as gray boxes. Black dots represent concatenation, and + and × represent addition and multiplication, respectively. 1 - x represents taking the input value and subtracting it from 1. Note that for simplicity, operations are only shown for single nodes or edges, although the operations act on all nodes and edges in parallel. In particular, the propagation section focuses on information sent and received by the first node only. In that section the strengths of the edges in the connectivity matrix determine what information is sent to each of the other nodes. Light gray connections indicate the value zero, corresponding to situations where a given edge is not present."}, {"section_index": "19", "section_name": "APPENDIX B GRAPH TRANSFORMATION DETAILS", "section_text": "In this section I describe in detail the implementations of each type of differentiable graph transformation.¹ A diagram of the implementation of each transformation is shown in Figure 4. Note that it is natural to think of these transformations as operating on a single graphical state, each modifying the state in place. However, in the technical descriptions of these transformations, the operations will be described as functions that take in an old graph and produce a new one, similarly to unrolling a recurrent network over time.

¹The code for each transformation, and for the GGT-NN model itself, is available at https://github. raph-transforme"}, {"section_index": "20", "section_name": "B.1 NODE ADDITION", "section_text": "The node addition transformation T_add : Γ × R^α → Γ takes as input a graph G and an input vector a ∈ R^α, and produces a graph G' with additional nodes. The annotation and strength of each new node is determined by a function f_add : R^α × R^β → R × R^N × R^β, where α is the length of the input vector, β is the length of the internal state vector, and as before N is the number of node types. The new nodes are then produced according to

    (s_{|V|+i}, x_{|V|+i}, h_i) = f_add(a, h_{i-1}),

starting with h_0 initialized to some learned initial state, and recurrently computing s_v and x_v for each new node, up to some maximum number of nodes. Based on initial experiments, I found that implementing f_add as a GRU layer followed by 2 hidden tanh layers was effective, although other recurrent networks would likely be similarly effective. The node hidden states h_v are initialized to zero. The recurrence should be computed as many times as the maximum number of nodes that might be produced. The recurrent function f_add can learn to output s_v = 0 for some nodes to create fewer nodes, if necessary.

Note that in order to use information from all of the existing nodes to produce the new nodes, the input to this transformation should include information provided by an aggregation transformation T_repr, described in section B.5.
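As a rough illustration of the recurrence above, the sketch below produces candidate nodes one at a time; f_add is any callable of the stated signature (in the paper it is a GRU layer followed by two tanh layers), and max_new_nodes is an assumed task-dependent bound.

def add_nodes(a, f_add, h0, max_new_nodes):
    # a: input vector; h0: learned initial state of f_add
    strengths, annotations, h = [], [], h0
    for _ in range(max_new_nodes):
        s, x, h = f_add(a, h)   # strength in [0, 1], annotation, next state
        strengths.append(s)     # s near 0 effectively suppresses the node
        annotations.append(x)
    return strengths, annotations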
"}, {"section_index": "21", "section_name": "B.2 NODE STATE UPDATE", "section_text": "The node state update transformation T_h : Γ × R^α → Γ takes as input a graph G and an input vector a ∈ R^α, and produces a graph G' with updated node states. This is accomplished by performing a GRU-style update for each node, where the input is a concatenation of a and that node's annotation vector x_v, and the state is the node's hidden state, according to

    r_v = σ(W_r[a, x_v] + U_r h_v + b_r),        z_v = σ(W_z[a, x_v] + U_z h_v + b_z),
    h̃_v = tanh(W[a, x_v] + U(r_v ⊙ h_v) + b),        h'_v = z_v ⊙ h̃_v + (1 - z_v) ⊙ h_v.

For some tasks, performance can be improved by providing information to nodes of a particular type only. For instance, if the input is a sentence, and one word of that sentence directly refers to a node type (e.g., if nodes of type 1 represent Mary, and Mary appears in the sentence), it can be helpful to allow all nodes of type 1 to perform an update using this information. To accomplish this, T_h can be modified to take node types into account. (This modification is denoted T_h^direct.) Instead of a single vector a ∈ R^α, the direct-reference transformation takes in A ∈ R^{N × α}, where A_n ∈ R^α is the input vector for nodes with type n. The update equations then become

    a_v = x_v^T A,
    r_v = σ(W_r[a_v, x_v] + U_r h_v + b_r),        z_v = σ(W_z[a_v, x_v] + U_z h_v + b_z),
    h̃_v = tanh(W[a_v, x_v] + U(r_v ⊙ h_v) + b),        h'_v = z_v ⊙ h̃_v + (1 - z_v) ⊙ h_v."}, {"section_index": "22", "section_name": "B.3. EDGE UPDATE", "section_text": "The edge update transformation T_c : Γ × R^α → Γ takes a graph G and an input vector a ∈ R^α, and produces a graph G' with updated edges. For each pair of nodes (v, v'), the update equations are

    c_{v,v'} = f_set([a, x_v, s_v, x_{v'}, s_{v'}]),        r_{v,v'} = f_reset([a, x_v, s_v, x_{v'}, s_{v'}]),
    C'_{v,v',y} = C_{v,v',y} (1 - r_{v,v',y}) + (1 - C_{v,v',y}) c_{v,v',y}.

The functions f_set, f_reset : R^{α+2N+2} → [0, 1]^M are implemented as neural networks. (In my experiments, I used a simple 2-layer fully connected network.) c_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be created if it does not exist, and r_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be removed if it does. Setting both to zero results in no change for that edge, and setting both to 1 toggles the edge state.
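A small sketch of this edge update under the closed form above (which is reconstructed here from the stated no-change/toggle behavior); f_set and f_reset are assumed callables returning M values in [0, 1].

import numpy as np

def edge_update(C, X, S, a, f_set, f_reset):
    # C: (V, V, M) fractional edge strengths; X: (V, N) annotations; S: (V,) strengths
    C_new = np.empty_like(C)
    for v in range(C.shape[0]):
        for w in range(C.shape[1]):
            feats = np.concatenate([a, X[v], [S[v]], X[w], [S[w]]])
            c, r = f_set(feats), f_reset(feats)
            # c = r = 0 leaves the edge alone; c = r = 1 toggles it
            C_new[v, w] = C[v, w] * (1.0 - r) + (1.0 - C[v, w]) * c
    return C_new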
"}, {"section_index": "23", "section_name": "B.4. PROPAGATION", "section_text": "The propagation transformation T_prop : Γ → Γ takes a graph G = G^(0) and runs a series of T propagation steps (as in GG-NN), returning the resulting graph G' = G^(T). The GG-NN propagation step is extended to handle node and edge strengths, as well as to allow more processing to occur to the information transferred across edges. The full propagation equations for step t are

    a_v^(t) = Σ_{v' ∈ V} s_{v'} Σ_{y=1}^{M} ( C_{v,v',y} ⊙ f_y^fwd(x_{v'}, h_{v'}^(t-1)) + C_{v',v,y} ⊙ f_y^bwd(x_{v'}, h_{v'}^(t-1)) ),    (5)
    z_v^(t) = σ(W_z[a_v^(t), x_v] + U_z h_v^(t-1) + b_z),    (6)
    r_v^(t) = σ(W_r[a_v^(t), x_v] + U_r h_v^(t-1) + b_r),    (7)
    h̃_v^(t) = tanh(W[a_v^(t), x_v] + U(r_v^(t) ⊙ h_v^(t-1)) + b),        h_v^(t) = (1 - z_v^(t)) ⊙ h_v^(t-1) + z_v^(t) ⊙ h̃_v^(t).    (8)

Equation (5) has been adjusted in the most significant manner (relative to Equation (2)). In particular, s_{v'} restricts propagation so that nodes with low strength send less information to adjacent nodes, s_edge has been replaced with C to allow edges with fractional strength, and the propagation matrices P_y, P'_y have been replaced with arbitrary functions f_y^fwd, f_y^bwd : R^N × R^D → R^α, where α is the length of the vector a_v^(t). I used a fully connected layer to implement each function in my experiments. Equations (6), (7), and (8) have also been modified slightly to add a bias term.

The aggregation transformation T_repr : Γ → R^α produces a graph-level representation vector from a graph. It functions very similarly to the output representation of a GG-NN (equation (3)), combining an attention mechanism with a node representation function, but is modified slightly to take into account node strengths. As in GG-NN, both i and j are neural networks, and in practice a single fully connected layer appears to be adequate for both:

    h_G = tanh( Σ_{v ∈ V} s_v σ(i(h_v, x_v)) ⊙ tanh(j(h_v, x_v)) ).
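The strength-weighted readout can be sketched directly from the h_G equation above; i_net and j_net stand in for the attention and representation networks (single dense layers in practice), and the shapes are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate(H, X, S, i_net, j_net):
    # H: (V, D) hidden states; X: (V, N) annotations; S: (V,) node strengths
    terms = [s * sigmoid(i_net(h, x)) * np.tanh(j_net(h, x))
             for h, x, s in zip(H, X, S)]
    return np.tanh(np.sum(terms, axis=0))  # graph representation h_G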
"}, {"section_index": "24", "section_name": "C.1 BABI TASKS", "section_text": "The knowledge graph object used during generation of the bAbI tasks is structured as a dictionary relating entities to each other with specific relationship types. Entities are identified based on their names, and include people (John, Mary, Sandra), locations (bedroom, kitchen, garden), objects (football, apple, suitcase), animals (mouse, wolf, cat), and colors (white, yellow, green), depending on the particular task. Relationships between entities are also expressed as strings, and are directed: if John is holding the milk there is an "is_in" relationship from "milk" to "John"; if Sandra is in the bedroom there is an "is_in" relationship from "Sandra" to "bedroom"; if Lily is green there is a "has_color" relationship from "Lily" to "green", etc.

The transformation from the knowledge object to a graph is straightforward: each entity used is assigned to a new node type, and relationships between entities are represented as edges between the corresponding nodes. To avoid confusion from overloaded relationships (such as "is_in" being used to represent an object being held by a person as well as a person being in a room), relation names are given a distinct edge type depending on the usage context. For instance, when a person is carrying an object, the generic "is_in" relationship becomes an edge of type "gettable_is_in_actor".

Some of the graph representations had to be modified in order to ensure that they contained all of the necessary information. For instance, task 3 requires the network to remember where items were in the past, but the knowledge object only contained references to their current locations. In these cases, a linked list structure was added to the knowledge object to allow the history information to be represented in the graph. In particular, each time an item changed locations, a new "record" node was added, with a "previous" edge to the previous history node and a "value" edge to the current location of the item. Each item then connected to the most recent history node using a "history-head" edge. This ensures that the history of each node is present in the graph.

In a few of the tasks, specific entities had multi-word representations. While this works for normal input, it makes it difficult to do direct reference, since direct reference is checked on an individual word level. These tasks were modified slightly so that the entities are referred to with single words (e.g. "red_square" instead of "red square").

1. John grabbed the milk.
2. John travelled to the bedroom.
3. Sandra took the football.
4. John went to the garden.
5. John let go of the milk.
6. Sandra let go of the football.
7. John got the football.
8. John grabbed the milk.
Where is the milk?

Figure 5: Diagram of one sample story from the bAbI dataset (Task 2), along with a graphical representation of the knowledge state after the italicized sentence.

An example of a graph produced from the bAbI tasks is given in Figure 5.

The cellular automaton task was mapped to graphical format as follows: Nodes have 5 types: zero, one, init-cell, left-cell, and right-cell. Edges have 2 types: value, and next-r. There is always exactly one "zero" node and one "one" node, and all of the cell nodes form a linked list, with a "value" edge connecting to either zero or one, and a "next-r" edge pointing to the next cell to the right (or no edge for the rightmost cell).

At the start of each training example, there are 13 timesteps with input of the form "init X" where X is 0 or 1. These timesteps indicate the first 13 initial cells. Afterward, there are 7 "simulate" inputs. At each of these timesteps, one new left-cell node is added on the left, one new right-cell node is added on the right, and then all cells update their value according to the Rule 30 update rules.

An example of the graphical format for the cellular automaton task is given in Figure 6.

Figure 6: Diagram of one example from the automaton task, along with a graphical representation of the automaton state after the fourth simulate command (italicized).
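For reference, the ground-truth behavior the model must reproduce at each "simulate" step can be sketched in a few lines of Python; the XOR/OR form below is equivalent to the Rule 30 table in Section 5.2.3, and is illustrative rather than the paper's data-generation code.

def rule30_step(cells):
    padded = [0, 0] + cells + [0, 0]   # one new cell appears on each side
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

state = [1]                            # a single 1 cell, as in Figure 3
for _ in range(7):                     # the seven simulate commands
    state = rule30_step(state)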
For the Turing machine task, nodes were assigned to 8 types: state-A, state-B, state-C, state-D, head, cell, 0, and 1. Edges have 16 types: head-cell, next-left, head-state, value, and 12 types of the form rule-R-W-D, where R is the symbol read (0 or 1), W is the symbol written (0 or 1), and D is the direction to move afterward (Left, Right, or None). State nodes are connected with rule edges, which together specify the rules governing the Turing machine. Cell nodes are connected to adjacent cells with next-left edges, and to the symbol on the tape with value edges. Finally, the head node is connected to the current state with a head-state edge, and to the current cell of the head with a head-cell edge.

At the start of each training example, each of the rules for the Turing machine are given, in the form "rule state-X R W state-Y D". Next, the initial state is given in the format "start state-X", and the initial contents of the tape (of length 4) are given sequentially in the format "input symbol-X", with the position for the head to start marked by "input symbol-X head". Finally, there are 6 "run" inputs, after each of which the head node updates its edges and the cell at the head updates its value according to the rules of the Turing machine. If the head leaves the left or right of the tape, a new node is introduced there.

An example of the graphical format for the Turing machine task is given in Figure 7.

Figure 7: Diagram of an example from the Turing machine task, with a graphical representation of the machine state after the second run command (italicized)."}, {"section_index": "25", "section_name": "APPENDIX D GRAPH SEQUENCE INPUT", "section_text": "The model described in Section 4 conditions the output of the model on the final graph produced by the network. This is ideal when the graph represents all of the necessary knowledge for solving the task. However, it may also be desirable for each graph to represent a subset of knowledge corresponding to a particular time, and for the output to be based on the sequence of graphs produced. For instance, in the third bAbI task (which requires reasoning about the temporal sequence of events), each graph could represent the state of the world at that particular time, instead of representing the full sequence of events prior to that time. In Appendix C, section C.1, I describe a transformation to the tasks which allows all information to be contained in the graph. But this adds complexity to the graphical structure. If it were possible for the model to take into account the full sequence of graphs, instead of just the final one, we could maintain the simplicity of the graph transformations.

To this end, I present an extension of the GGT-NN model that can produce output using the full graphical sequence. In the extended model, the graphical output of the network after each input sentence is saved for later use. Then, when processing the query, the same set of query transformations are applied to every intermediate graph, producing a sequence of representation vectors h_1^answer, ..., h_K^answer. These are then combined into a final summary representation vector h^summary
using a recurrent network such as a GRU layer, from which the output can be produced. The modified pseudocode for this is shown in Algorithm 2.

Algorithm 2 Sequence-Extended Pseudocode
G_0 ← ∅                                   ▷ Initialize G to an empty graph
for k from 1 to K do                       ▷ Process each sentence
    G_k ← T_h(G_{k-1}, i^(k))
    if direct reference enabled then
        G_k ← T_h^direct(G_k, D^(k))
    end if
    if intermediate propagation enabled then
        G_k ← T_prop(G_k)
    end if
    h_G^(k) ← T_repr(G_k)
    G_k ← T_add(G_k, [i^(k), h_G^(k)])
    G_k ← T_c(G_k, i^(k))
end for
h_0^summary ← 0                            ▷ Initialize h^summary to the zero vector
for k from 1 to K do                       ▷ Process the query for each graph
    G'_k ← T_h^query(G_k, i^query)
    if direct reference enabled then
        G'_k ← T_h^query,direct(G'_k, D^query)
    end if
    G'_k ← T_prop^query(G'_k)
    h_G^query,(k) ← T_repr^query(G'_k)
    h_k^summary ← f_summarize(h_G^query,(k), h_{k-1}^summary)
end for
return f_output(h_K^summary)

I evaluated the extended model on bAbI tasks 3 and 5, the two tasks which asked questions about a sequence of events. (Note that although Task 14 also involves a sequence of events, it uses a set of discrete named time periods and so is not applicable to this modification.) The model was trained on each of these tasks, without the extra record and history nodes used to store the sequence, instead simply using the sequence of graphs to encode the relevant information. Due to the simpler graphs produced, intermediate propagation was also disabled.

Results from training the model are shown in Table 4. The accuracy of the extended model appears to be slightly inferior to the original model in general, although the extended direct-reference model of task 5 performs slightly better than its original counterpart. One possible explanation for the inferiority of the extended model is that the increased amount of query processing made the model more likely to overfit on the training data. Even so, the extended model shows promise, and could be advantageous for modeling complex tasks for which preprocessing the graph would be impractical.

Task                          Direct reference   No direct reference
                                  Accuracy            Accuracy
3 - Three Supporting Facts         90.3%               65.4%
5 - Three Arg. Relations           89.8%               74.2%

Table 4: Performance of the sequence-extended GGT-NN on the two bAbI tasks with a temporal component."}]
S1Bb3D5gg
[{"section_index": "0", "section_name": "LEARNING END-TO-END GOAL-ORIENTED DIALOG", "section_text": "Antoine Bordes, Y-Lan Boureau & Jason Weston

Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The most useful applications of dialog systems such as digital personal assistants or bots are currently goal-oriented and transactional: the system needs to understand a user request and complete a related task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialog systems is slot-filling (Lemon et al., 2006; Wang and Lemon, 2013; Young et al., 2013), which predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restaurant reservation system, such slots can be the location, price range or type of cuisine of a restaurant. Slot-filling has proven reliable but is inherently hard to scale to new domains: it is impossible to manually encode all features and slots that users might refer to in a conversation.

End-to-end dialog systems, usually based on neural networks (Shang et al., 2015; Vinyals and Le, 2015), escape this limitation: all their components are directly trained on past dialogs, with no assumption on the domain or dialog state structure, thus making it easy to automatically scale up to new domains. They have shown promising performance in non goal-oriented chit-chat settings, where they were trained to predict the next utterance in social media and forum threads (Ritter et al., 2011; Wang et al., 2013) or movie conversations. But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure 1 in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly define a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al., 2016). In particular, it is unclear if end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting: can end-to-end dialog models be competitive with traditional methods even in the well-defined narrow-domain tasks where they excel?
If not, where do they fall short?

This paper aims to make it easier to address these questions by proposing an open resource to test end-to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al., 2015b), we designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounded with an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these tasks cover several dialog stages and test if models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation or dealing with new entities not appearing in dialogs from the training set. In addition to showing how the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialog system, we also propose results on two additional datasets extracted from real interactions with users to confirm that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability.

The goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use that to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge. Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure 1 depicts the tasks and Section 3 details them. Section 4 compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al., 2015a), an attention-based architecture that has proven competitive for non goal-oriented dialog (Dodge et al., 2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training. We confirm our findings on real human-machine dialogs from the restaurant reservation dataset of the 2nd Dialog State Tracking Challenge, or DSTC2 (Henderson et al., 2014a), which we converted into our task format, showing that Memory Networks can outperform a dedicated slot-filling rule-based baseline. We also evaluate on a dataset of human-human dialogs extracted from an online concierge service that books restaurants for users. Overall, the per-response performance is encouraging, but the per-dialog one remains low, indicating that end-to-end models still need to improve before being able to reliably handle goal-oriented dialog.

[Figure 1: example dialog panels for Task 1 (issuing API calls), Task 2 (updating API calls), Task 3 (displaying options), Task 4 (providing extra information) and Task 5 (conducting full dialogs), with sample KB facts (R_cuisine, R_location, R_price, R_rating, R_availability, R_phone, R_address) for the restaurants The_Place and The_Fancy_Pub.]

Figure 1: Goal-oriented dialog tasks.
A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Tasks 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra information. Task 5 combines everything.

                              T1     T2     T3     T4     T5      T6     Concierge
Average statistics:
  Number of utterances        12     17     43     15     55      54        8
  - user utterances            5      7      7      4     13       6        4
  - bot utterances             7     10     10      4     18       8        4
  - outputs from API calls     0      0     23      7     24      40        0
Vocabulary size                      3,747                      1,229     8,629
Candidate set size                   4,212                      2,406    11,482
Training dialogs                     1,000                      1,618     3,249
Validation dialogs                   1,000                        500       403
Test dialogs                         1,000 (*)                  1,117       402

Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.

The most successful goal-oriented dialog systems model conversation as partially observable Markov decision processes (POMDPs). However, despite recent efforts to learn modules (Henderson et al., 2014b), they still require many hand-crafted features for the state and action space representations, which restrict their usage to narrow domains. Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDPs (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems.

Serban et al. list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or the participation to a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al.). Datasets are often based on interactions between users and existing systems (or ensembles of systems) like DSTC datasets, SFCore or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon Mechanical Turk, e.g., (Hixon et al., 2015; Wen et al., 2015; Su et al., 2015a;b).
While this has clear advantages, it prevents reproducibility and consistent comparisons of methods in the exact same setting.

The closest resource to ours might be the set of tasks described in (Dodge et al., 2016); some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reflect full conversation."}, {"section_index": "3", "section_name": "3. GOAL-ORIENTED DIALOG TASKS", "section_text": "All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation, the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users."}, {"section_index": "4", "section_name": "3.1 RESTAURANT RESERVATION SIMULATION", "section_text": "The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is defined by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB.

The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants. Each query must contain four fields: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant (depending on the party size).

Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required fields: e.g., the request in Figure 1 is [cuisine: British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same). Those patterns are combined with the KB entities to form thousands of different utterances."}, {"section_index": "5", "section_name": "3.1.1 TASK DEFINITIONS", "section_text": "We now detail each task. Tasks 1 and 2 test dialog management to see if end-to-end systems can learn to implicitly track dialog state (never given explicitly), whereas Tasks 3 and 4 check if they can learn to use KB facts in a dialog setting. Task 3 also requires learning to sort. Task 5 combines all tasks.

Task 1: Issuing API calls. A user request implicitly defines a query that can contain from 0 to 4 of the required fields (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for filling the missing fields and eventually generate the correct corresponding API call. The bot asks for information in a deterministic order, making prediction possible.

Task 2: Updating API calls. Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly).
The order in which fields are updated is random. The bot must ask users if they are done with their updates and issue the updated API call.

Task 3: Displaying options. Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if this is the last remaining one. We only keep examples with API calls retrieving at least 3 options.

Task 4: Providing extra information. Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer.

Task 5: Conducting full dialogs. We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3."}, {"section_index": "6", "section_name": "3.1.2 DATASETS", "section_text": "We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the first halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine x 5 locations x 3 price ranges x 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses. We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets.

For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difficulty of each task, the challenge on the OOV test sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog, something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training.

We generate five datasets, one per task defined in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of fields.
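As an illustration of the scale of each KB, a hypothetical construction of one half is sketched below. The fact names follow Figure 1, while the restaurant naming scheme, the party-size assignment, and the particular halves of cuisines/locations are assumptions for the example.

from itertools import product

cuisines = ["british", "french", "indian", "italian", "spanish"]  # 5 of the 10
locations = ["london", "paris", "bombay", "rome", "madrid"]       # 5 of the 10
prices, ratings = ["cheap", "moderate", "expensive"], range(1, 9)

kb = []
for c, l, p, r in product(cuisines, locations, prices, ratings):
    name = f"resto_{l}_{p}_{c}_{r}stars"                     # assumed naming
    size = ["two", "four", "six", "eight"][r % 4]            # one party size each
    kb += [(name, "R_cuisine", c), (name, "R_location", l),
           (name, "R_price", p), (name, "R_rating", str(r)),
           (name, "R_phone", name + "_phone"),
           (name, "R_address", name + "_address"),
           (name, "R_availability", size)]

assert len(kb) == 4200  # 600 restaurants x 7 facts, matching the text above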
Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.¹ Candidates are ranked from a set of all bot utterances and API calls appearing in training, validation and test sets (plain and OOV) for all tasks combined.

¹Lowe et al. (2016) termed this setting Next-Utterance-Classification."}, {"section_index": "7", "section_name": "3.2 DIALOG STATE TRACKING CHALLENGE", "section_text": "Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), which is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 fields: type of cuisine (91 choices), location (5 choices) and price range (3 choices). The dataset was originally designed for dialog state tracking, hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6.

We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We did use the original test set but use a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a).

This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations and also do not always have a deterministic behavior (the order in which they can ask for information varies)."}, {"section_index": "8", "section_name": "3.3. ONLINE CONCIERGE SERVICE", "section_text": "Tasks 1-6 are, at least partially, artificial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls. All conversations are between native English speakers.

We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), (3) running some manually defined regex to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these have to be predicted, but without any argument (unlike in Task 2).

The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets. Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are not structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes.
"}, {"section_index": "9", "section_name": "4 MODELS", "section_text": "To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory Networks."}, {"section_index": "10", "section_name": "4.1 RULE-BASED SYSTEMS", "section_text": "Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to be able to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail.

However, the Dialog State Tracking Challenge task (T6) contains some real interactions with users. This makes rule-based systems less straightforward and not so accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way. We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire for triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, API calls and/or update a dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for Concierge data as it is even less constrained."}, {"section_index": "11", "section_name": "4.2 CLASSICAL INFORMATION RETRIEVAL MODELS", "section_text": "TF-IDF Match: For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF-IDF weighted cosine similarity between the bag-of-words of the input and the bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter).
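A minimal sketch of this baseline using scikit-learn follows; for brevity the vectorizer is fit on the query and candidates themselves rather than the full training corpus, and the example strings are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_rank(history, candidates):
    # score each candidate by TF-IDF weighted cosine similarity to the input
    matrix = TfidfVectorizer().fit_transform([history] + candidates)
    scores = cosine_similarity(matrix[0], matrix[1:])[0]
    return sorted(zip(candidates, scores), key=lambda pair: -pair[1])

ranked = tfidf_rank("may i have a table in london with british cuisine",
                    ["how many people would be in your party?",
                     "is there anything else i can help you with?"])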
Nearest Neighbor: Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to only be the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency."}, {"section_index": "12", "section_name": "4.3 SUPERVISED EMBEDDING MODELS", "section_text": "A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs. The embedding vectors are trained directly for this goal. In contrast, word embeddings are most well-known in the context of unsupervised training on raw text as in word2vec (Mikolov et al., 2013). Such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response y is scored against the input x: f(x, y) = (Ax)^T By, where A and B are d x V word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing A = B, which sometimes works better, and optimize the choice on the validation set.

The embeddings are trained with a margin ranking loss: f(x, y) > m + f(x, ȳ), with m the size of the margin, and we sample N negative candidate responses ȳ per example, and train with SGD. This approach has been previously shown to be very effective in a range of contexts (Bai et al., 2009).
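A compact numpy sketch of the scoring function and loss follows; the dimensions and random initialization are illustrative (in the actual model A and B are learned with SGD).

import numpy as np

d, V = 32, 5000
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(d, V))  # input embedding matrix
B = rng.normal(scale=0.1, size=(d, V))  # response embeddings (optionally B = A)

def bow(word_ids):
    x = np.zeros(V)
    for w in word_ids:
        x[w] += 1.0
    return x

def f(x_ids, y_ids):
    return (A @ bow(x_ids)) @ (B @ bow(y_ids))  # f(x, y) = (Ax)^T By

def margin_loss(x_ids, y_ids, negatives, m=0.1):
    # hinge terms enforcing f(x, y) > m + f(x, y_bar) for each sampled negative
    return sum(max(0.0, m + f(x_ids, nb) - f(x_ids, y_ids)) for nb in negatives)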
Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016). By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as our end-to-end model baseline.

We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response, and (iii) how it outputs the response. The details are given in Appendix A.

Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low dimensional space makes it hard to differentiate between exact word matches and matches between words with similar meaning. While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) not seen before in training, no word embedding is available, typically resulting in failure.

Both problems can be alleviated with match type features. Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information, using exact matching word cues when OOV entity embeddings are not known, as long as it has access to a KB with the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks.
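A sketch of the match type feature construction follows; the type-word spellings and the small KB excerpt are assumptions for illustration, while the rule itself follows the description above.

ENTITY_TYPES = {"british": "<cuisine>", "london": "<location>",
                "expensive": "<price>", "six": "<party_size>"}  # KB excerpt

def add_match_types(candidate, context):
    # append a type word when a KB entity of that type appears both in the
    # candidate and in the input or memory
    context_words = set(context.split())
    types = {t for word, t in ENTITY_TYPES.items()
             if word in candidate.split() and word in context_words}
    return candidate + " " + " ".join(sorted(types))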
"}, {"section_index": "13", "section_name": "5 EXPERIMENTS", "section_text": "Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge respectively. Columns 2-7 give the results of each method tried in terms of per-response accuracy and per-dialog accuracy, the latter given in parenthesis. Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, if only one response is incorrect this could result in a failed dialog, i.e. failure to achieve the goal (in this case, of achieving a restaurant booking). Note that we test Memory Networks (MemNNs) with and without match type features; the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for the best performing models are given in Appendix C.

The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, which is true on both the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non-goal directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair; consider the example in Figure 1.

Supervised embeddings outperform classical IR methods in general, indicating that learning mappings between words (via word embeddings) is important. However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy, however there is no dialog where the goal is actually achieved (i.e., the mean dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking to wait, making API calls and asking if there are any other options necessary. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, resulting in most of its errors, even when match type features are provided.

Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them. While the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. On the OOV tasks performance is again improved, but this is all due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iteratively accessing and reasoning over the conversation helps, e.g. on T3 using 1 hop gives 64.8% while 2 hops yields 74.7%. Appendix B displays illustrative examples of Memory Networks predictions on T1-4 and Concierge.

Memory Networks with match type features give two performance gains over the same models without match type features: (i) T4 (providing information) becomes solvable because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. Still, tasks T3 and T5 remain fail cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model.

Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot.
However, it is not easy to build an effective rule-based system on real data: on T6 the rule-based system reaches only 33.3% per-response accuracy and is outperformed by the learned models.

Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis. (*) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (**) We did not implement MemNNs+match type on Concierge, because this method requires a KB and there is none associated with it.

Task | Rule-based Systems | TF-IDF Match (no type) | TF-IDF Match (+type) | Nearest Neighbor | Supervised Embeddings | MemNNs (no match type) | MemNNs (+match type)
T1: Issuing API calls | 100 (100) | 5.6 (0) | 22.4 (0) | 55.1 (0) | 100 (100) | 99.9 (99.6) | 100 (100)
T2: Updating API calls | 100 (100) | 3.4 (0) | 16.4 (0) | 68.3 (0) | 68.4 (0) | 100 (100) | 98.3 (83.9)
T3: Displaying options | 100 (100) | 8.0 (0) | 8.0 (0) | 58.8 (0) | 64.9 (0) | 74.9 (2.0) | 74.9 (0)
T4: Providing information | 100 (100) | 9.5 (0) | 17.8 (0) | 28.6 (0) | 57.2 (0) | 59.5 (3.0) | 100 (100)
T5: Full dialogs | 100 (100) | 4.6 (0) | 8.1 (0) | 57.1 (0) | 75.4 (0) | 96.1 (49.4) | 93.4 (19.7)
T1(OOV): Issuing API calls | 100 (100) | 5.8 (0) | 22.4 (0) | 44.1 (0) | 60.0 (0) | 72.3 (0) | 96.5 (82.7)
T2(OOV): Updating API calls | 100 (100) | 3.5 (0) | 16.8 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | 94.5 (48.4)
T3(OOV): Displaying options | 100 (100) | 8.3 (0) | 8.3 (0) | 58.8 (0) | 65.0 (0) | 74.4 (0) | 75.2 (0)
T4(OOV): Providing inform. | 100 (100) | 9.8 (0) | 17.2 (0) | 28.6 (0) | 57.0 (0) | 57.6 (0) | 100 (100)
T5(OOV): Full dialogs | 100 (100) | 4.6 (0) | 9.0 (0) | 48.4 (0) | 58.2 (0) | 65.5 (0) | 77.7 (0)
T6: Dialog state tracking 2 | 33.3 (0) | 1.6 (0) | 1.6 (0) | 21.9 (0) | 22.6 (0) | 41.1 (0) | 41.0 (0)
Concierge (*) | n/a | 1.1 (0.2) | n/a | 13.4 (0.5) | 14.6 (0.5) | 16.7 (1.2) | n/a (**)

Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results to the user, e.g. displaying options in T3. The improvement observed on the simulated tasks, e.g. where MemNNs outperform supervised embeddings which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help in the real tasks as well. Results on Concierge confirm this observation: the pattern of relative performances of methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy.

We have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog learning methods in a systematic and controlled way. We hope this will help foster progress of end-to-end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements, (ii) the breakdown in tasks will help focus research and development to improve the learning methods, and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the testbed using a variant of end-to-end Memory Networks, which prove an effective model on these tasks relative to other baselines, but are still lacking in some key areas."}, {"section_index": "14", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009). Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187-196. ACM.
Banchs, R. E. (2012). Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the ACL.

Chen, Y.-N., Hakkani-Tür, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech.

Dahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology, pages 43-48. Association for Computational Linguistics.

Dodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluating prerequisite qualities for learning end-to-end dialog systems. In Proc. of ICLR.

Henderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263.

Henderson, M., Thomson, B., and Young, S. (2014b). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299.

Hixon, B., Clark, P., and Hajishirzi, H. (2015). Learning knowledge graphs for question answering through conversational dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA.

Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2000). Cobot in LambdaMOO: A social statistics agent. In AAAI/IAAI, pages 36-41.

Jafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10.

Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414.

Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The Knowledge Engineering Review, 28(01), 59-73.

Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. Proceedings of NAACL.

Su, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386.

Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2015b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.

Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.

Wang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP.

Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Weston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. Proceedings of ICLR.

Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), 1160-1179.

Kim, S., D'Haro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2016). The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS).

Ritter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.

Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. (2015a). Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of the AAAI Conference on Artificial Intelligence.

Shang, L., Lu, Z., and Li, H. (2015). Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.

Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. In NIPS.

Wang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference."}, {"section_index": "16", "section_name": "A MEMORY NETWORKS IMPLEMENTATION", "section_text": "Storing and representing the conversation history. As the model conducts a conversation with the user, at each time step t the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time there are c_1^u, ..., c_{t-1}^u user utterances and c_1^r, ..., c_{t-1}^r model responses stored (i.e. the entire conversation); API calls are stored as bot utterances c_i^r, and KB facts resulting from such calls as user utterances c_i^u. The aim at time t is thus to choose the next response c_t^r. We train on existing full dialog transcripts, so at training time we know the upcoming utterance c_t^r and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words, and in memory it is represented as a vector using the embedding matrix A, i.e. the memory is an array with entries:

m = (AΦ(c_1^u), AΦ(c_1^r), ..., AΦ(c_{t-1}^u), AΦ(c_{t-1}^r))

where Φ(·) maps the utterance to a bag of dimension V (the vocabulary), and A is a d × V matrix, where d is the embedding dimension. We retain the last user utterance c_t^u as the "input" to be used directly in the controller. The contents of each memory slot m_i as described so far do not contain any information about which speaker spoke an utterance, or at what time during the conversation. We therefore encode both of those pieces of information in the mapping Φ by extending the vocabulary to contain T = 1000 extra "time features" which encode the index i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.

Attention over the memory. The last user utterance c_t^u is embedded using the same matrix A, giving q = AΦ(c_t^u), which can also be seen as the initial state of the controller. At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between q and the memories is computed by taking the inner product followed by a softmax: p_i = Softmax(q^T m_i), giving a probability vector over the memories. The vector that is returned back to the controller is then computed by o = R Σ_i p_i m_i, where R is a d × d square matrix. The controller state is then updated with q_2 = o + q. The memory can be iteratively reread to look for additional pertinent information using the updated state of the controller q_2 instead of q, and in general using q_h on iteration h, with a fixed number of iterations N (termed N hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops.

Choosing the response. The final prediction is then defined as:

â = Softmax(q_{N+1}^T WΦ(y_1), ..., q_{N+1}^T WΦ(y_C))

where there are C candidate responses in y, and W is of dimension d × V.
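Putting the three components together, the following is a minimal NumPy sketch of one forward pass of this reader; the toy shapes are illustrative, a single R matrix is shared across hops (our reading of the text above), and the time/speaker features are omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_respond(memory_bags, input_bag, cand_bags, A, R, W, hops=3):
    """One forward pass: embed memories, attend over them for N hops, score candidates.
    memory_bags: list of V-dim bag-of-words vectors (the stored conversation),
    input_bag:   V-dim bag for the last user utterance,
    cand_bags:   list of V-dim bags for the C candidate responses."""
    m = np.stack([A @ b for b in memory_bags])   # memory entries, (num_memories, d)
    q = A @ input_bag                            # initial controller state, (d,)
    for _ in range(hops):
        p = softmax(m @ q)                       # p_i = Softmax(q^T m_i)
        q = R @ (p @ m) + q                      # o = R * sum_i p_i m_i; q <- o + q
    scores = np.array([q @ (W @ y) for y in cand_bags])
    return softmax(scores)                       # distribution over candidates (a-hat)

# toy usage with V=50 vocabulary and d=8 embedding dimension
rng = np.random.default_rng(0)
V, d = 50, 8
A, W, R = rng.normal(size=(d, V)), rng.normal(size=(d, V)), rng.normal(size=(d, d))
mems = [rng.random(V) for _ in range(4)]
print(memn2n_respond(mems, rng.random(V), [rng.random(V) for _ in range(5)], A, R, W))
```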
In our tasks the set y is a (large) set of candidate responses which includes all possible bot utterances and API calls.

The entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between â and the true label a."}, {"section_index": "17", "section_name": "B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK", "section_text": "Tables 3, 4, 5 and 6 display examples of predictions of the best performing Memory Network on full dialogs (Task 5, with 3 hops) on test examples of Tasks 1-4, along with the values of the attention over each memory for each hop (p_i as defined in Sec. A). This model does not use match type features. Table 7 then displays an example of prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example, along with the values of the attention over each memory for each hop.

Tables 8 and 9 respectively display the values of the hyperparameters of the best Supervised Embeddings and Memory Networks selected for each task. These models were selected using the best validation sets."}, {"section_index": "18", "section_name": "D ADDITIONAL RESULTS", "section_text": "Table 10 provides results for additional variants of supervised embeddings, using either a dictionary that includes all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record the type: a special word corresponding to type T (e.g., phone, address, etc.) is appended to the representation of a candidate if the candidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. As seen in Table 10, match type features improve performance on out-of-vocabulary tasks 1 and 5, bringing it closer to that of Memory Networks without match type features, but still quite lagging Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except in Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup).
We therefore encode both of those pieces of information\nin the mapping \u00ae by extending the vocabulary to contain T = 1000 extra \u201ctime features\u201d which encode the\nindex i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user\nor the model.\nTime | Locutor Dialog History Hop #1 | Hop #2 | Hop #3\n1 User hi 054 016 .020\n2 Bot hello what can i help you with today .040 024 008\n3 User may i have a table in paris .099 .137 141\n4 Bot i\u2019m on it 048 028 .004\n5 User <silence> 181 099 037\n6 Bot any preference on a type of cuisine 056 .090 014\n7 User i love indian food 1S9 .188\n\n8 Bot how many people would be in your party 051 022 010\n9 User we will be six 129 144 114\n10 Bot which price range are looking for 039 028 006\n11 User in a moderate price range please 128\n\n12 Bot ok let me look into some options for you 016 O11 003\nUser input <silence>\n\nCorrect answer api_call indian paris six moderate\n\nPredicted answer [ api_call indian paris six moderate [Correct]\nTable 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the mode\ncorrectly focuses on the 2 important pieces: the original API call and the utterance giving the update.\nTime | Locutor Dialog history Hop #1 | Hop #2 | Hop #3\n3 User may i have a table in paris .061 .072 .040\n4 Bot i\u2019m on it .026 O12 .001\n5 User <silence> .087 042 012\n6 Bot any preference on a type of cuisine .026 023 001\n7 User i love indian food .081 .070 055\n8 Bot how many people would be in your party 025 .006 001\n9 User we will be six .059 051 018\n10 Bot which price range are looking for 038 043 004\n11 User in a moderate price range please .080 095 096\n12 Bot ok let me look into some options for you 025 042 003\n13 User <silence> 127 069 032\n14 Bot api_call indian paris six moderate .062 113 043\n15 User instead could it be with french food 188\n\n16 Bot sure is there anything else to update 016 .007 001\n17 User no .028 .013 007\n18 Bot ok let me look into some options for you O11 .006 .000\nUser input <silence>\n\nCorrect answer api_call french paris six moderate\n\nPredicted answer [| api_call french paris six moderate [Correct]\nTable 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing\nthe information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token\n\nused to indicate that the user did not speak at this turn \u2014 the model has to carry out the conversation with no\nadditional input.\nTable 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong:\nit should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and\n\nmatch type features do not help. It is correct here by luck, the task is not solved overall (see Tab. 2). 
Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
14 | Bot | api_call indian paris six moderate | .012 | .000 | .000
15 | User | instead could it be with french food | .067 | .103 | .147
20 | Bot | api_call french paris six moderate | .012 | .000 | .000
21 | User | resto_1 r_phone resto_1_phone | .018 | .004 |
23 | User | resto_1 r_cuisine french | .029 | .005 |
24 | User | resto_1 r_location paris | .060 | |
25 | User | resto_1 r_number six | .050 | |
26 | User | resto_1 r_price moderate | .060 | .090 |
27 | User | resto_1 r_rating 6 | .016 | .002 | .000
30 | User | resto_2 r_cuisine french | .031 | .007 | .000
31 | User | resto_2 r_location paris | .040 | .081 | .004
32 | User | resto_2 r_number six | .020 | .012 | .000
33 | User | resto_2 r_price moderate | .029 | .009 | .000
37 | User | resto_3 r_cuisine french | .014 | .001 | .000
38 | User | resto_3 r_location paris | .028 | .016 | .001
39 | User | resto_3 r_number six | .024 | .022 | .004
40 | User | resto_3 r_price moderate | .039 | .015 | .001
User input: <silence>
Correct answer: what do you think of this option: resto_1
Predicted answer: what do you think of this option: resto_1 [Correct]

Table 6: Task 4 (Providing extra-information). The model knows it must display a phone or an address, but, as explained in Section A, the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard. As shown in the results of Tab. 2, this problem can be solved by adding match type features, which allow to emphasize entities actually appearing in the history. The attention is globally wrong here.

Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
14 | Bot | api_call indian paris six moderate | .006 | .000 | .000
15 | User | instead could it be with french food | .024 | .011 | .007
20 | Bot | api_call french paris six moderate | .005 | .000 | .001
21 | User | resto_1 r_phone resto_1_phone | .011 | .005 | .004
22 | User | resto_1 r_address resto_1_address | .018 | .004 | .001
23 | User | resto_1 r_cuisine french | .018 | .003 | .001
24 | User | resto_1 r_location paris | .068 | .091 | .108
25 | User | resto_1 r_number six | .086 | .078 | .020
26 | User | resto_1 r_price moderate | .070 | |
27 | User | resto_1 r_rating 6 | .014 | .006 | .008
28 | User | resto_2 r_phone resto_2_phone | .015 | .009 | .006
29 | User | resto_2 r_address resto_2_address | .014 | .004 | .001
31 | User | resto_2 r_location paris | .075 | .176 |
32 | User | resto_2 r_number six | .100 | .126 | .026
33 | User | resto_2 r_price moderate | .038 | .090 | .167
35 | User | resto_3 r_phone resto_3_phone | .004 | .001 | .001
36 | User | resto_3 r_address resto_3_address | .005 | .002 | .001
38 | User | resto_3 r_location paris | .028 | .028 | .026
39 | User | resto_3 r_number six | .039 | .013 | .002
40 | User | resto_3 r_price moderate | .018 | .008 | .013
42 | Bot | what do you think of this option: resto_1 | .074 | .001 | .000
43 | User | let's do it | .032 | .004 | .001
44 | Bot | great let me do the reservation | .003 | .000 | .000
User input: do you have its address
Correct answer: here it is resto_1_address
Predicted answer: here it is: resto_8_address [Incorrect]

Table 7: Concierge Data. The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English ("rservation", "i'll check into it").

Time | Locutor | Dialog History | Hop #1 | Hop #2
1 | User | hey concierge | .189 | .095
2 | User | could you check if i can get a rservation at <org> <date> for brunch | .178 |
3 | User | <number> people | .197 | .142
4 | User | <silence> | .187 | .167
5 | Bot | hi <person> unfortunately <org> is fully booked for <date> and there's <number> people on the waiting list | .225 |
User input: when's the earliest availability
Correct answer: i'll check
Pred. answer #1: i'm on it [Incorrect]
Pred. answer #2: i'll find out [Incorrect]
Pred. answer #3: i'll take a look [Incorrect]
Pred. answer #4: i'll check [Correct]
Pred. answer #5: i'll check into it [Incorrect]

Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.

Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Use History
Task 1 | 0.01 | 0.01 | 32 | 100 | True
Task 2 | 0.01 | 0.01 | 128 | 100 | False
Task 3 | 0.01 | 0.1 | 128 | 1000 | False
Task 4 | 0.001 | 0.1 | 128 | 1000 | False
Task 5 | 0.01 | 0.01 | 32 | 100 | True
Task 6 | 0.001 | 0.01 | 128 | 100 | False
Concierge | 0.001 | 0.1 | 64 | 100 | False
Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.

Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Nb Hops
Task 1 | 0.01 | 0.1 | 128 | 100 | 1
Task 2 | 0.01 | 0.1 | 32 | 100 | 1
Task 3 | 0.01 | 0.1 | 32 | 100 | 3
Task 4 | 0.01 | 0.1 | 128 | 100 | 2
Task 5 | 0.01 | 0.1 | 32 | 100 | 3
Task 6 | 0.01 | 0.1 | 128 | 100 | 4
Concierge | 0.001 | 0.1 | 128 | 100 | 2

Table 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis.

Task | Sup. Emb. (no match type, no bigram) | Sup. Emb. (+match type, no bigram) | Sup. Emb. (+bigrams, no match type) | MemNNs (no match type) | MemNNs (+match type)
T1: Issuing API calls | 100 (100) | 83.2 (0) | 98.6 (92.4) | 99.9 (99.6) | 100 (100)
T2: Updating API calls | 68.4 (0) | 68.4 (0) | 68.3 (0) | 100 (100) | 98.3 (83.9)
T3: Displaying options | 64.9 (0) | 64.9 (0) | 64.9 (0) | 74.9 (2.0) | 74.9 (0)
T4: Providing information | 57.2 (0) | 57.2 (0) | 57.3 (0) | 59.5 (3.0) | 100 (100)
T5: Full dialogs | 75.4 (0) | 76.2 (0) | 83.4 (0) | 96.1 (49.4) | 93.4 (19.7)
T1(OOV): Issuing API calls | 60.0 (0) | 67.2 (0) | 58.8 (0) | 72.3 (0) | 96.5 (82.7)
T2(OOV): Updating API calls | 68.3 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | 94.5 (48.4)
T3(OOV): Displaying options | 65.0 (0) | 65.0 (0) | 62.1 (0) | 74.4 (0) | 75.2 (0)
T4(OOV): Providing inform. | 57.0 (0) | 57.1 (0) | 57.0 (0) | 57.6 (0) | 100 (100)
T5(OOV): Full dialogs | 58.2 (0) | 64.4 (0) | 50.4 (0) | 65.5 (0) | 77.7 (0)
T6: Dialog state tracking 2 | 22.6 (0) | 22.1 (0) | 21.8 (0) | 41.1 (0) | 41.0 (0)
r10FA8Kxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Cybenko proved that a network with a large enough single hidden layer of sigmoid units car\napproximate any decision boundary. Empirical work, however, suggests that it can be difficult tc\ntrain shallow nets to be as accurate as deep nets. Dauphin and Bengio|(2013) trained shallow nets\non SIFT features to classify a large-scale ImageNet dataset and found that it was difficult to train\nlarge, high-accuracy, shallow nets. A study of deep convolutional nets suggests that for vision tasks\ndeeper models are preferred under a parameter budget (e.g. (2014);\n\nSimonyan and Zisserman (2014 ;|Srivastava et al. (2015). Similarly, |Seide et al.|(2011) and|Geras\n\net al.|(2015) show that deeper models are more accurate than shallow models in speech acoustic\n\nmodeling. More recently, (2015) showed that it is possible to gain increases in accuracy\n\nin models with few parameters by training deeper, thinner nets (FitNets) to mimic much wider nets\n\nCohen and Shashua|(2016);|Liang and Srikant|(2016) suggest that the representational efficiency of\n\ndeep networks scales exponentially with depth, but it is unclear if this applies only to pathological!\nproblems. or is encountered in practice on data sets such as TIMIT and CIFAR.\n(2014), however, demonstrated that shallow nets sometimes can learn the functions\nlearned by deep nets, even when restricted to the same number of parameters as the deep nets. They\ndid this by first training state-of-the-art deep models, and then training shallow models to mimic\nthe deep models. Surprisingly, and for reasons that are not well understood, the shallow models\nlearned more accurate functions when trained to mimic the deep models than when trained on the\noriginal data used to train the deep models. In some cases shallow models trained this way were as\naccurate as state-of-the-art deep models. But this demonstration was made on the TIMIT speech\nrecognition benchmark. Although their deep teacher models used a convolutional layer, convolution\nis less important for TIMIT than it is for other domains such as image classification.\n(2014) also presented results on CIFAR-10 which showed that a shallow model\n\ncould learn functions almost as accurate as deep convolutional nets. Unfortunately, the results on\nCIFAR-10 are less convincing than those for TIMIT. To train accurate shallow models on CIFAR-10"}, {"section_index": "1", "section_name": "Do DEEP CONVOLUTIONAL NETS REALLY NEED TO\nBE DEEP AND CONVOLUTIONAL?", "section_text": "Gregor Urban\u2019, Krzysztof J. Geras?, Samira Ebrahimi Kahou\u00ae, Ozlem Aslan*, Shengjie Wang'\nAbdelrahman Mohamed\u00ae, Matthai Philipose\u00ae, Matt Richardson\u00ae, Rich Caruana\u00ae"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we show that the methods Ba and Caruana used to train shallow students to mimic deey\nteacher models on TIMIT do not work as well on problems such as CIFAR-10 where multiple layer:\nof convolution are required to train accurate teacher models. If the student models have a simila\nnumber of parameters as the deep teacher models, high accuracy can not be achieved without multipl\nlayers of convolution even when the student models are trained via distillation.\nTo ensure that the shallow student models are trained as accurately as possible, we use Bayesian\noptimization to thoroughly explore the space of architectures and learning hyperparameters. 
Although this combination of distillation and hyperparameter optimization allows us to train the most accurate shallow models ever trained on CIFAR-10, the shallow models still are not as accurate as deep models. Our results clearly suggest that deep convolutional nets do, in fact, need to be both deep and convolutional, even when trained to mimic very accurate models via distillation (Hinton et al., 2015).

In this paper, we revisit the CIFAR-10 experiments in Ba and Caruana (2014). Unlike in that work, here we compare shallow models to state-of-the-art deep convolutional models, and restrict the number of parameters in the shallow student models to be comparable to the number of parameters in the deep convolutional teacher models. Because we anticipated that our results might be different, we follow their approach closely to eliminate the possibility that the results differ merely because of changes in methodology. Note that the goal of this paper is not to train models that are small or fast, but to determine whether shallow models with a comparable parameter budget can be as accurate as deep convolutional models.

There are many steps required to train shallow student models to be as accurate as possible: train state-of-the-art deep convolutional teacher models, form an ensemble of the best deep models, collect and combine their predictions on a large transfer set, and then train carefully optimized shallow student models to mimic the teacher ensemble. For negative results to be informative, it is important that each of these steps be performed as well as possible. In this section we describe the experimental methodology in detail. Readers familiar with distillation (model compression), training deep models on CIFAR-10, data augmentation, and Bayesian hyperparameter optimization may wish to skip to the empirical results in Section 3."}, {"section_index": "3", "section_name": "2.1 MODEL COMPRESSION AND DISTILLATION", "section_text": "The key idea behind model compression is to train a compact model to approximate the function learned by another larger, more complex model. Bucila et al. (2006) showed how a single neural net of modest size could be trained to mimic a much larger ensemble. Although the small neural nets contained 1000x fewer parameters, often they were as accurate as the large ensembles they were trained to mimic.

Model compression works by passing unlabeled data through the large, accurate teacher model to collect the real-valued scores it predicts, and then training a student model to mimic these scores. Hinton et al. (2015) generalized the methods of Bucila et al. (2006) and Ba and Caruana (2014) by incorporating a parameter to control the relative importance of the soft targets provided by the teacher model to the hard targets in the original training data, as well as a temperature parameter that regularizes learning by pushing targets towards the uniform distribution. Hinton et al. (2015) also demonstrated that much of the knowledge passed from the teacher to the student is conveyed as dark knowledge contained in the relative scores (probabilities) of outputs corresponding to other classes, as opposed to the scores given to just the output for the one correct class.

Surprisingly, distillation often allows smaller and/or shallower models to be trained that are nearly as accurate as the larger, deeper models they are trained to mimic, yet these same small models are not as accurate when trained on the 1-hot hard targets in the original training set. The reason for this is not yet well understood.
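To make the distillation setup just described concrete, the following is a minimal NumPy sketch of temperature-softened targets and the combined soft/hard loss of Hinton et al. (2015); the temperature T and mixing weight alpha here are illustrative choices, not values from this paper.

```python
import numpy as np

def softened_targets(logits, T=4.0):
    """Soft targets at temperature T: higher T pushes the distribution toward
    uniform, exposing the 'dark knowledge' in the scores of the wrong classes."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, onehot, T=4.0, alpha=0.5):
    """Weighted sum of cross-entropy against the soft teacher targets (scaled by
    T^2 so gradients stay comparable) and against the original hard labels."""
    p_soft = softened_targets(teacher_logits, T)
    q_soft = softened_targets(student_logits, T)
    q_hard = softened_targets(student_logits, T=1.0)
    ce_soft = -(p_soft * np.log(q_soft + 1e-12)).sum(axis=-1).mean()
    ce_hard = -(onehot * np.log(q_hard + 1e-12)).sum(axis=-1).mean()
    return alpha * (T ** 2) * ce_soft + (1 - alpha) * ce_hard
```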
Similar compression and distillation methods have also successfully been used in speech recognition (e.g. Geras et al. (2015); Li et al. (2014)) and reinforcement learning (Parisotto et al. (2016); Rusu et al. (2016)). Romero et al. (2015) showed that distillation methods can be used to train small students that are more accurate than the teacher models by making the student models deeper, but thinner, than the teacher model.

We train shallow mimic nets using data labeled by an ensemble of deep teacher nets trained on the original 1-hot CIFAR-10 training data. The deep teacher models are trained in the usual way using softmax outputs and cross-entropy cost function. Following Ba and Caruana (2014), the student mimic models are not trained with cross-entropy on the ten p values where p_k = e^{z_k} / Σ_j e^{z_j} output by the softmax layer from the deep teacher model, but instead are trained on the un-normalized log probability values z (the logits) before the softmax activation. Training on the logarithms of predicted probabilities (logits) helps provide the dark knowledge that regularizes students by placing emphasis on the relationships learned by the teacher model across all of the outputs.

As in Ba and Caruana (2014), the student is trained as a regression problem given training data {(x^(1), z^(1)), ..., (x^(T), z^(T))}:

L(W) = 1/(2T) Σ_t ||g(x^(t); W) − z^(t)||_2^2

where W represents all of the weights in the network, and g(x^(t); W) is the model prediction on the t-th training data sample.

Because augmentation allows us to generate large training sets from the original 50,000 images, we use augmented data as the transfer set for model compression. No extra unlabeled data is required.

A shallow net has to have more hidden units in each layer to match the number of parameters in a deep net. Ba and Caruana (2014) found that training these wide, shallow mimic models with backpropagation was slow, and introduced a linear bottleneck layer between the input and non-linear layers to speed learning. The bottleneck layer speeds learning by reducing the number of parameters that must be learned, but does not make the model deeper because the linear terms can be absorbed back into the non-linear weight matrix after learning. See Ba and Caruana (2014) for details. To match their experiments we use linear bottlenecks when training student models with 0 or 1 convolutional layers, but did not find the linear bottlenecks necessary when training student models with more than 1 convolutional layer.
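As a concrete illustration, here is a minimal NumPy sketch of a shallow mimic net with a linear bottleneck and the logit-regression loss L(W) above; the layer sizes are hypothetical stand-ins and the gradient updates are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3072-dim CIFAR-10 inputs, k-dim linear bottleneck,
# h hidden ReLU units, C = 10 output logits.
D, k, h, C = 3072, 128, 1024, 10
Wb = rng.normal(scale=0.01, size=(k, D))    # linear bottleneck (no activation)
W1 = rng.normal(scale=0.01, size=(h, k))
W2 = rng.normal(scale=0.01, size=(C, h))

def student_logits(x):
    """Shallow mimic net: linear bottleneck -> ReLU layer -> logits. After
    training, W1 @ Wb can be folded into one weight matrix, so the bottleneck
    speeds learning without actually making the model deeper."""
    return W2 @ np.maximum(0.0, W1 @ (Wb @ x))

def mimic_loss(X, Z):
    """L(W) = 1/(2T) * sum_t ||g(x_t; W) - z_t||^2 against teacher logits Z."""
    preds = np.stack([student_logits(x) for x in X])
    return 0.5 * np.mean(np.sum((preds - Z) ** 2, axis=1))

X = rng.normal(size=(4, D))                 # toy batch of inputs
Z = rng.normal(size=(4, C))                 # teacher-ensemble logits for the batch
print(mimic_loss(X, Z))
```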
"}, {"section_index": "4", "section_name": "2.4 BAYESIAN HYPERPARAMETER OPTIMIZATION", "section_text": "The goal of this work is to determine empirically if shallow nets can be trained to be as accurate as deep convolutional models using a similar number of parameters in the deep and shallow models. If we succeed in training a shallow model to be as accurate as a deep convolutional model, this provides an existence proof that shallow models can represent and learn the complex functions learned by deep convolutional models. If, however, we are unable to train shallow models to be as accurate as deep convolutional nets, we might fail only because we did not train the shallow nets well enough.

In all our experiments we employ Bayesian hyperparameter optimization using Gaussian process regression to ensure that we thoroughly and objectively explore the hyperparameters that govern learning. The implementation we use is Spearmint (Snoek et al., 2012). The hyperparameters we optimize with Bayesian optimization include the initial learning rate, momentum, scaling of the initial random weights, scaling of the inputs, and terms that determine the width of each of the network's layers (i.e. number of convolutional filters and neurons). More details of the hyperparameter optimization can be found in Sections 2.5, 2.7, 2.8 and in the Appendix."}, {"section_index": "5", "section_name": "2.5 TRAINING DATA AND DATA AUGMENTATION", "section_text": "The CIFAR-10 (Krizhevsky, 2009) data set consists of a set of natural images from 10 different object classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The dataset is a labeled subset of the 80 million tiny images dataset (Torralba et al., 2008) and is divided into 50,000 train and 10,000 test images. Each image is 32×32 pixels in 3 color channels, yielding input vectors with 3072 dimensions. We prepared the data by subtracting the mean and dividing by the standard deviation of each image vector. We train all models on a subset of 40,000 images and use the remaining 10,000 images as the validation set for the Bayesian optimization. The final trained models only used 80% of the theoretically available training data (as opposed to retraining on all of the data after hyperparameter optimization).

We employ the HSV data augmentation technique as described by Snoek et al. (2015): we shift hue, saturation and value by uniform random amounts Δh ~ U(−Dh, Dh), Δs ~ U(−Ds, Ds), Δv ~ U(−Dv, Dv). Saturation and value are additionally scaled globally: a_s ~ U(1/(1+A_s), 1+A_s), a_v ~ U(1/(1+A_v), 1+A_v). The five constants Dh, Ds, Dv, A_s, A_v are treated as additional hyperparameters in the Bayesian hyperparameter optimization.

All training images are mirrored left-right randomly with a probability of 0.5. The input images are further scaled and jittered randomly by cropping windows of size 24×24 up to 32×32 at random locations and then scaling them back to 32×32. The procedure is as follows: we sample an integer value S ~ U(24, 32) and then a pair of integers x, y ~ U(0, 32 − S). The transformed resulting image is R = f_spline,3(I[x : x + S, y : y + S]), with I denoting the original image and f_spline,3 denoting the 3rd order spline interpolation function that maps the 2D array back to 32×32 (applied to the three color channels separately).

All data augmentations for the teacher models are computed on the fly using different random seeds. For student models trained to mimic the ensemble (see Section 2.7 for details of the ensemble teacher model), we pre-generated 160 epochs worth of randomly augmented training data, evaluated the ensemble's predictions (logits) on these samples, and saved all data and predictions to disk. All student models thus see the same training data in the same order. The parameters for HSV-augmentation in this case had to be selected beforehand; we chose to use the settings found with the best single model (Dh = 0.06, Ds = 0.26, Dv = 0.20, A_s = 0.21, A_v = 0.13). Pre-saving the logits and augmented data is important to reduce the computational cost at training time, and to ensure that all student models see the same training data.
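A sketch of this augmentation pipeline follows, assuming images are H×W×3 float arrays in [0, 1]; the order in which the HSV shift and global scaling are composed is our assumption, and matplotlib/scipy here are stand-ins for whatever image routines were actually used.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
from scipy.ndimage import zoom

rng = np.random.default_rng(0)

def hsv_augment(img, Dh=0.06, Ds=0.26, Dv=0.20, As=0.21, Av=0.13):
    """Shift H/S/V by uniform amounts and scale S/V globally, as described above."""
    hsv = rgb_to_hsv(img)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-Dh, Dh)) % 1.0          # hue wraps
    hsv[..., 1] = hsv[..., 1] * rng.uniform(1/(1+As), 1+As) + rng.uniform(-Ds, Ds)
    hsv[..., 2] = hsv[..., 2] * rng.uniform(1/(1+Av), 1+Av) + rng.uniform(-Dv, Dv)
    return hsv_to_rgb(np.clip(hsv, 0.0, 1.0))

def crop_jitter(img):
    """Random crop of size S in [24, 32], rescaled to 32x32 with cubic splines."""
    S = int(rng.integers(24, 33))
    x, y = rng.integers(0, 33 - S, size=2)
    crop = img[x:x+S, y:y+S]
    scale = 32.0 / S
    return zoom(crop, (scale, scale, 1), order=3)  # 3rd-order spline per channel

def augment(img):
    if rng.random() < 0.5:
        img = img[:, ::-1]                          # left-right mirror
    return crop_jitter(hsv_augment(img))

print(augment(rng.random((32, 32, 3))).shape)       # -> (32, 32, 3)
```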
"}, {"section_index": "6", "section_name": "2.6 LEARNING-RATE SCHEDULE", "section_text": "We train all models using SGD with Nesterov momentum. The initial learning rate and momentum are chosen by Bayesian optimization. The learning rate is reduced according to the evolution of the model's validation error: it is halved if the validation error does not drop for ten epochs in a row. It is not reduced within the next eight epochs following a reduction step. Training ends if the error did not drop for 30 epochs in a row or if the learning rate was reduced by a factor of more than 2000 in total. This schedule provides a way to train the highly varying models in a fair manner (it is not feasible to optimize all of the parameters that define the learning schedule). It also decreases the time spent to train each model compared to using a hand-selected overestimate of the number of epochs to train, thus allowing us to train more models in the hyperparameter search."}, {"section_index": "7", "section_name": "2.7 SUPER TEACHER: AN ENSEMBLE OF 16 DEEP CONVOLUTIONAL CIFAR-10 MODELS", "section_text": "One limitation of the CIFAR-10 experiments performed in Ba and Caruana (2014) is that the teacher models were not state-of-the-art. The best deep models they trained on CIFAR-10 had only 88% accuracy, and the ensemble of deep models they used as a teacher had only 89% accuracy. The accuracies were not state-of-the-art because they did not use augmentation and because their deepest models had only three convolutional layers. Because our goal is to determine if shallow models can be as accurate as deep convolutional models, it is important that the deep models we compare to (and use as teachers) are as accurate as possible.

We train deep neural networks with eight convolutional layers, three intermittent max-pooling layers and two fully-connected hidden layers. We include the size of these layers in the hyperparameter optimization, by allowing the first two convolutional layers to contain from 32 to 96 filters each, the next two layers to contain from 64 to 192 filters, and the last four convolutional layers to contain from 128 to 384 filters. The two fully-connected hidden layers can contain from 512 to 1536 neurons. We parametrize these model sizes by four scalars (the layers are grouped as 2-2-4) and include the scalars in the hyperparameter optimization. All models are trained using Theano (Bergstra et al., 2010).

We optimize eighteen hyperparameters overall: initial learning rate on [0.01, 0.05], momentum on [0.80, 0.91], L2 weight decay on [5·10^-5, 4·10^-4], an initialization coefficient on [0.8, 1.35] which scales the initial weights of the CNN, four separate dropout rates, five constants controlling the HSV data augmentation, and the four scaling constants controlling the networks' layer widths. The learning rate and momentum are optimized on a log scale (as opposed to linear scale) by optimizing the exponent with appropriate bounds, e.g. LR = e^{-x} optimized over x on [3.0, 4.6]. See the Appendix for more details about hyperparameter optimization.

We trained 129 deep CNN models with Spearmint.
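The search space can be sketched as follows; since Spearmint's own API is not reproduced here, random sampling over the stated bounds serves as a stand-in for its GP-based proposals, and only four of the eighteen hyperparameters are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bounds taken from the text above; the remaining dropout, augmentation,
# and layer-width terms would follow the same pattern.
SPACE = {
    "log_lr_exponent":  (3.0, 4.6),    # LR = exp(-x), i.e. LR roughly in [0.01, 0.05]
    "momentum":         (0.80, 0.91),  # also searched on a log scale in the paper
    "l2_weight_decay":  (5e-5, 4e-4),
    "init_coefficient": (0.8, 1.35),
}

def sample_config(space=SPACE):
    """Draw one configuration; a GP-based optimizer such as Spearmint would
    instead propose the next point from its posterior over validation error."""
    cfg = {k: float(rng.uniform(lo, hi)) for k, (lo, hi) in space.items()}
    cfg["learning_rate"] = float(np.exp(-cfg.pop("log_lr_exponent")))
    return cfg

print(sample_config())
```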
The best model obtained an accuracy of 92.78%; the fifth best achieved 92.67%. See Table 1 for the sizes and architectures of the three best models.

We are able to construct a more accurate model on CIFAR-10 by forming an ensemble of multiple deep convolutional neural nets, each trained with different hyperparameters, and each seeing slightly different training data (as the augmentation parameters vary). We experimented with a number of ensembles of the many deep convnets we trained, using accuracy on the validation set to select the best combination. The final ensemble contained 16 deep convnets and had an accuracy of 94.0% on the validation set, and 93.8% on the final test set. We believe this is among the top published results for deep learning on CIFAR-10. The ensemble averages the logits predicted by each model before the softmax layers.

We used this very accurate ensemble model as the teacher model to label the data used to train the shallower student nets. As described earlier, the logits (the scores just prior to the final softmax layer) from each of the CNN teachers in the ensemble model are averaged for each class, and the average logits are used as final regression targets to train the shallower student neural nets.
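A minimal sketch of this logit averaging; the shapes are illustrative.

```python
import numpy as np

def ensemble_logits(per_model_logits):
    """Average the logits predicted by each teacher before the softmax;
    per_model_logits has shape (num_models, num_examples, num_classes)."""
    return np.mean(per_model_logits, axis=0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# toy check with 16 hypothetical teachers on 5 examples and 10 classes
rng = np.random.default_rng(0)
Z = rng.normal(size=(16, 5, 10))
avg = ensemble_logits(Z)             # regression targets for the student nets
print(softmax(avg).argmax(axis=-1))  # the ensemble's predicted classes
```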
The table shows the accuracy of\nthe best three deep convolutional models we could train on CIFAR-10, as well as the accuracy of\n'3.16 = Sqrt(10) falls halfway between 1 and 10 on log scale.\nTable 1: Accuracy on CIFAR-10 of shallow and deep models trained on the original 0/1 hard clas:\nlabels using Bayesian optimization with dropout and weight decay. Key: c = convolution layer; my\n= max-pooling layer; fe = fully-connected layer; lfc = linear bottleneck layer; exponents indicat\nrepetitions of a layer. The last two models (*) are numbers reported by(Ba and Caruana (2014). The\nmodels with 1-4 convolutional layers at the top of the table are included for comparison with studen\nmodels of similar architecture in Table [2]. All of the student models in Table [2] with 1, 2, 3, and <\nconvolutional layers are more accurate than their counterparts in this table that are trained on th\noriginal 0/1 hard targets \u2014 as expected distillation yields shallow models of higher accuracy that\nshallow models trained on the original training data.\nModel Architecture # parameters | Accuracy\n1 conv. layer c-mp-lfc-fe 10M 84.6%\n2 conv. layer c-mp-c-mp-fe 10M 88.9%\n3 conv. layer c-mp-c-mp-c-mp-fe 10M 91.2%\n4 conv. layer c-mp-c-c-mp-c-mp-fc 10M 91.75%\nTeacher CNN 1** 76c?-mp-126c?-mp-148c!-mp-1200fc\u201d 5.3M 92.78%\nTeacher CNN 2\"\u00a2 96c?-mp-171c?-mp-128c4-mp-512fc\u201d 2.5M 92.77%\nTeacher CNN 3\u00b0\u00a2 54c?-mp-158c?-mp-189c4-mp-1044fc\u201d 5.8M 92.67%\nEnsemble of 16 CNNs c?-mp-c?-mp-c4-mp-fc\u201d 83.4M 93.8%\nTeacher CNN (*) 128c-mp-128c-mp-128c-mp-1k fe 2.1M 88.0%\nEnsemble, 4 CNNs (*) 128c-mp-128c-mp-128c-mp-1k fe 8.6M 89.0%\nTable 2: Comparison of student models with varying number of convolutional layers trained to mimic\nthe ensemble of 16 deep convolutional CIFAR-10 models in Table[f]. The best performing student\nmodels have 3\u20144 convolutional layers and 1OM-\u201431.6M parameters. The student models in this\ntable are more accurate than the models of the same architecture in Table[I]that were trained on the\noriginal 0/1 hard targets \u2014 shallow models trained with distillation are more accurate than shallow\nmodels trained on 0/1 hard targets. The student model trained by [Ba and Caruana| (2014) is shown in\nthe last line for comparison; it is less accurate and much larger than the student models trained here\nthat also have 1 convolutional layer.\n1M | 3.16M | 10M | 316M | 70M\nBottleneck, | hidden layer 65.8% | 68.2% | 69.5% | 70.2% -\n2 hidden layers 66.2% | 70.9% | 73.4% | 74.3% -\n3 hidden layers 66.8% | 71.0% | 73.0% | 73.9% -\n4 hidden layers 66.7% | 69.5% | 71.6% | 72.0% -\n5 hidden layers 66.4% | 70.0% | 71.4% | 71.5% -\n1 conv. layer, 1 max-pool, Bottleneck 84.5% | 86.3% | 87.3% | 87.7% -\n2 conv. layers, 2 max-pool 87.9% | 89.3% | 90.0% | 90.3% -\n3 conv. layers, 3 max-pool 90.7% | 91.6% | 91.9% | 92.3% -\n4 conv. layers, 3 max-pool 91.3% | 91.8% | 92.6% | 92.6% -\n\n~SNN-ECNN-MIMIC-30k 128c-p-I200L-30k\ntrained on ensemble (Ba and Caruana\\|2014)\n\n85.8%\nthe ensemble of 16 deep CNNs. For comparison, the accuracy of the ensemble trained by|Ba an\nCaruana|(2014)) is included at the bottom of the table.\nTable [2] summarizes the results after Bayesian\nhyperparameter optimization for student mod-\nels of different depths and number of parameters\ntrained on soft targets (average logits) to mimic\nthe teacher ensemble of 16 deep CNNs. 
For comparison, the student model trained by Ba and Caruana (2014) is also shown.

The first four rows in Table 1 show the accuracy of convolutional models with 10 million parameters and 1, 2, 3, and 4 convolutional layers. The accuracies of these same architectures with 1M, 3.16M, 10M, and 31.6M parameters when trained as students on the soft targets predicted by the teacher ensemble are shown in Table 2. Comparing the accuracies of the models with 10 million parameters in both tables, we see that training student models to mimic the ensemble leads to significantly better accuracy in every case. The gains are more pronounced for shallower models, most likely because their learnable internal representations do not naturally lead to good generalization in this task when trained on the 0/1 hard targets: the difference in accuracy for models with one convolutional layer is 2.7% (87.3% vs. 84.6%) and only 0.8% (92.6% vs. 91.8%) for models with four convolutional layers.

Figure 1 summarizes the results in Table 2 for student models of different depth, number of convolutional layers, and number of parameters when trained to mimic the ensemble teacher model. Student models trained on the ensemble logits are able to achieve accuracies previously unseen on CIFAR-10 for models with so few layers. Also, it is clear that there is a huge gap between the convolutional student models at the top of the figure and the non-convolutional student models at the bottom of the figure: the most accurate student MLP has accuracy less than 75%, while the least accurate convolutional student model with the same number of parameters but only one convolutional layer has accuracy above 87%. And the accuracy of the convolutional student models increases further as more layers of convolution are added. Interestingly, the most accurate student MLPs with no convolutional layers have only 2 or 3 hidden layers; the student MLPs with 4 or 5 hidden layers are not as accurate.

Comparing the student MLP with only one hidden layer (bottom of the graph) to the student CNN with 1 convolutional layer clearly suggests that convolution is critical for this problem even when models are trained via distillation, and that it is very unlikely that a shallow non-convolutional model with 100 million parameters or less could ever achieve accuracy comparable to a convolutional model. It appears that if convolution is critical for teacher models trained on the original 0/1 hard targets, it is likely to be critical for student models trained to mimic these teacher models.

[Figure 1: accuracy (y-axis) versus number of parameters (1M to 31M, log-scale x-axis) for student CNNs with 1-4 convolutional layers and student MLPs with 1-5 hidden layers.]

Figure 1: Accuracy of student models with different architectures trained to mimic the CIFAR10 ensemble. The average performance of the five best models of each hyperparameter-optimization experiment is shown, together with dashed lines indicating the accuracy of the best and the fifth best model from each setting. The short horizontal lines at 10M parameters are the accuracy of models trained without compression on the original 0/1 hard targets.
Adding depth to the\nstudent MLPs without adding convolution does not significantly close this \u201cconvolutional gap\u2019.\nFurthermore, comparing student CNNs with 1, 2, 3, and 4 convolutional layers, it is clear that CN}\nstudents benefit from multiple convolutional layers. Although the students do not need as many\nlayers as teacher models trained on the original 0/1 hard targets, accuracy increases significantly a\nmultiple convolutional layers are added to the model. For example, the best student with only on\nconvolutional layer has 87.7% accuracy, while the student with the same number of parameters (31M\nand 4 convolutional layers has 92.6% accuracy.\nOne pattern that is clear in the graph is that all student models benefit when the number of parameter:\nincreases from | million to 31 million parameters. It is interesting to note, however, that the larges:\nstudent (31M) with a one convolutional layer is less accurate than the smallest student (1M) with twc\nconvolutional layers, further demonstrating the value of depth in convolutional models.\nIn summary, depth-constrained student models trained to mimic a high-accuracy ensemble of deep\nconvolutional models perform better than similar models trained on the original hard targets (the\n\u201ccompression\u201d gaps in Figure[I), student models need at least 3-4 convolutional layers to have high\naccuracy on CIFAR-10, shallow students with no convolutional layers perform poorly on CIFAR-10,\nand student models need at least 3-10M parameters to perform well. We are not able to compress\ndeep convolutional models to shallow student models without significant loss of accuracy.\nWe are currently running a reduced set of experiments on ImageNet, though the chances of shallow\nmodels performing well on a more challenging problem such as ImageNet appear to be slim."}, {"section_index": "8", "section_name": "4 DISCUSSION", "section_text": "Interestingly, we noti\nThis surprised us, an\n\niced that mimic networks perform consistently worse when trained using dropou\nsuggests that training student models on the soft-targets from a teacher provide:\n\nsignificant regularization for the student models obviating the need for extra regularization method:\n\nsuch as dropout. This is consistent with the observation made by (\n\n) that studen\n\nmimic models did not seem to overfit. (2015) claim that soft targets convey mor\n\ninformation per sam\nsoft targets for other\nRomero et al.\n\nple than Boolean hard targets. The also suggest that the dark knowledge in th\nclasses further helped regularization, and that early stopping was unnecessary\n) extend distillation by using the intermediate representations learned by thi\n\nteacher as hints to guide training deep students, and teacher confidences further help regularization\nby providing a measure of sample simplicity to the student, akin to curriculum learning. In othe\n\nsuggest that the soft targets provided by a teacher provide a form o\n\nwork, |Pereyra et al.\nconfidence penalty that penalizes low entropy distributions and label smoothing, both of whicl\n\nimprove regularizati:\n\non by maintaining a reasonable ratio between the logits of incorrect classe:\nFigure[T]includes short horizontal lines at 1OM parameters indicating the accuracy of non-student\nmodels trained on the original 0/1 hard targets instead of on the soft targets. 
This \u201ccompression\ngap\u201d is largest for shallower models, and as expected disappears as the student models become\narchitecturally more similar to the teacher models with multiple layers of convolution. The benefits of\ndistillation are most significant for shallow models, yielding an increase in accuracy of 3% or more.\nAlthough we are not able to train shallow models to be as accurate as deep models, the models trained\nvia distillation are the most accurate models of their architecture ever trained on CIFAR-10. For\nexample, the best single-layer fully-connected MLP (no convolution) we trained achieved an accuracy\nof 70.2%. We believe this to be the most accurate shallow MLP ever reported for CIFAR-10 (in\ncomparison to 63.1% achieved by|Le et al.|(2013), 63.9% by|Memisevic et al.|(2015) and 64.3% by\nGeras and Sutton] (2015)). Although this model cannot compete with convolutional models, clearly\ndistillation helps when training models that are limited by architecture and/or number of parameters.\nSimilarly, the student models we trained with 1, 2, 3, and 4 convolutional layers are, we believe.\nthe most accurate convnets of those depths reported in the literature. For example, the ensemble\nteacher model in [Ba and Caruana] was an ensemble of four CNNs, each of which had 3\nconvolutional layers, but only achieved 89% accuracy, whereas the single student CNNs we train via\ndistillation achieve accuracies above 90% with only 2 convolutional layers, and above 92% with 3\nconvolutional layers. The only other work we are aware of that achieves comparable high accuracy\nwith non-convolutional MLPs is recent work by {Lin et al.|(2016). They train multi-layer Z-Lin\nnetworks. and use a powerful form of data augmentation based on deformations that we did not use.\nZhang et al.|(2016) question the traditional view of regularization in deep models. Although they dc\n\nnot discuss distillation, they suggest that in deep learning traditional function approximation appear:\nto be deeply intertwined with massive memorization. The multiple soft targets used to train studen\n\nmodels have a high information density 2015) and thus provide regularization by\nreducing the impact of brute-force memorization."}, {"section_index": "9", "section_name": "5 CONCLUSIONS", "section_text": "We train shallow nets with and without convolution to mimic state-of-the-art deep convolutiona\nnets. If one controls for the number of learnable parameters, nets containing a single fully-connectec\nnon-linear layer and no convolutional layers are not able to learn functions as accurate as deepe\nconvolutional models. This result is consistent with those reported in [Ba and Caruana|\nHowever, we also find that shallow nets that contain only 1-2 convolutional layers also are unabl.\nto achieve accuracy comparable to deeper models if the same number of parameters are used 11\nthe shallow and deep models. Deep convolutional nets are significantly more accurate than shallov\nconvolutional models, given the same parameter budget. We do, however, see evidence that mode\ncompression allows accurate models to be trained that are shallower and have fewer convolutiona\nlayers than the deep convolutional architectures needed to learn high-accuracy models from thi\noriginal 1-hot hard-target training data. The question remains why extra layers are required to trait\naccurate models from the original training data."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Rich Caruana. 
Do deep nets really need to be deep? In NIPS, 2014.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In SciPy, 2010.

Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006.

Nadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decompositions. arXiv preprint arXiv:1603.00162, 2016.

George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.

Yann N. Dauphin and Yoshua Bengio. Big neural networks waste capacity. arXiv:1301.3583, 2013.

Krzysztof J. Geras and Charles Sutton. Scheduled denoising autoencoders. In ICLR, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.

Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.

David Eigen, Jason Rolfe, Rob Fergus, and Yann LeCun. Understanding deep architectures using a recursive convolutional network. In ICLR (workshop track), 2014.

Krzysztof J. Geras, Abdel-rahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richardson, and Charles Sutton. Blending LSTMs into CNNs. arXiv:1511.06433, 2015.

Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size DNN with output-distribution-based criteria. In INTERSPEECH, 2014.

Shiyu Liang and R. Srikant. Why deep neural networks? arXiv preprint arXiv:1610.04161, 2016.

Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. In ICLR, 2016.

Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing output distributions. ICLR, 2017.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. ICLR, 2015.

Andrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gülçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In ICLR, 2016.

Frank Seide, Gang Li, and Dong Yu. Conversational speech transcription using context-dependent deep neural networks. In INTERSPEECH, 2011.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2014.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. NIPS, 2012.

Rupesh K. Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Antonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition.
TPAMI, 30(11), 2008.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

Quoc Le, Tamás Sarlós, and Alexander Smola. Fastfood: computing Hilbert space expansions in loglinear time. In ICML, 2013.

Zhouhan Lin, Roland Memisevic, Shaoqing Ren, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv:1511.02580v1, 2016.

Roland Memisevic, Kishore Konda, and David Krueger. Zero-bias autoencoders and the benefits of co-adapting features. In ICLR, 2015.

Table 3: Optimization bounds for student models. (Models trained on 0/1 hard targets were described in Sections 6.1 and 6.2.) Abbreviations: fc (fully-connected layer, ReLU), c (convolutional, ReLU), linear (fully-connected bottleneck layer, linear activation function), dependent (dependent variable chosen s.t. parameter budget is met).

Model (budget)        | 1st layer       | 2nd layer          | 3rd layer      | 4th layer      | 5th layer
No conv. layer (1M)   | 500-5000 (fc)   | dependent (linear) |                |                |
No conv. layer (3.1M) | 1000-20000 (fc) | dependent (linear) |                |                |
No conv. layer (10M)  | 5000-30000 (fc) | dependent (linear) |                |                |
No conv. layer (31M)  | 5000-45000 (fc) | dependent (linear) |                |                |
1 conv. layer (1M)    | 40-150 (c)      | dependent (linear) | 200-1600 (fc)  |                |
1 conv. layer (3.1M)  | 50-300 (c)      | dependent (linear) | 100-4000 (fc)  |                |
1 conv. layer (10M)   | 50-450 (c)      | dependent (linear) | 500-20000 (fc) |                |
1 conv. layer (31M)   | 200-600 (c)     | dependent (linear) | 1000-4100 (fc) |                |
2 conv. layers (1M)   | 20-120 (c)      | 20-120 (c)         | dependent (fc) |                |
2 conv. layers (3.1M) | 50-250 (c)      | 20-120 (c)         | dependent (fc) |                |
2 conv. layers (10M)  | 50-350 (c)      | 20-120 (c)         | dependent (fc) |                |
2 conv. layers (31M)  | 50-800 (c)      | 20-120 (c)         | dependent (fc) |                |
3 conv. layers (1M)   | 20-110 (c)      | 20-110 (c)         | 20-110 (c)     | dependent (fc) |
3 conv. layers (3.1M) | 40-200 (c)      | 40-200 (c)         | 40-200 (c)     | dependent (fc) |
3 conv. layers (10M)  | 50-350 (c)      | 50-350 (c)         | 50-350 (c)     | dependent (fc) |
3 conv. layers (31M)  | 50-650 (c)      | 50-650 (c)         | 50-650 (c)     | dependent (fc) |
4 conv. layers (1M)   | 25-100 (c)      | 25-100 (c)         | 25-100 (c)     | 25-100 (c)     | dependent (fc)
4 conv. layers (3.1M) | 50-150 (c)      | 50-150 (c)         | 50-200 (c)     | 50-200 (c)     | dependent (fc)
4 conv. layers (10M)  | 50-300 (c)      | 50-300 (c)         | 50-350 (c)     | 50-350 (c)     | dependent (fc)
4 conv. layers (31M)  | 50-500 (c)      | 50-500 (c)         | 50-650 (c)     | 50-650 (c)     | dependent (fc)

Models in the first four rows in Table 1 are trained similarly to those in Section 6.1, and are architecturally equivalent to the four convolutional student models shown in Table 2 with 10 million parameters. The following hyperparameters are optimized: initial learning rate [0.0015, 0.025] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), constants C1, C2 ∈ [0, 1] that control the number of filters or neurons in different layers, and up to four different dropout rates DOc1 ∈ [0.05, 0.4], DOc2 ∈ [0.1, 0.6], DOc3 ∈ [0.1, 0.7], DOf1 ∈ [0.1, 0.7] for the different layers. Weight decay was set to 2 · 10^-4 and we used the same data augmentation settings as for the student models.
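The "dependent" entries in Table 3 can be computed in closed form: once the sampled layer sizes fix most of the parameter count, the width of the dependent layer is whatever lands the total on the budget. The sketch below shows one way to do this; the 5x5 kernels, the 8x8 feature-map size after pooling, and the 10-class output are illustrative assumptions, not values taken from the table.

```python
def conv_params(c_in, c_out, k=5):
    # weights plus biases of one k x k convolutional layer
    return k * k * c_in * c_out + c_out

def dependent_fc_units(budget, fixed, fan_in, n_classes=10):
    # width of the dependent fully-connected layer such that the total
    # parameter count lands on the budget (floored, at least 1 unit)
    return max(1, (budget - fixed) // (fan_in + n_classes + 1))

# e.g. the "2 conv. layers (10M)" row: 200 and 100 filters sampled from
# the listed ranges; the fc width is then the dependent variable.
fixed = conv_params(3, 200) + conv_params(200, 100)
fan_in = 100 * 8 * 8   # assumed feature-map size after two 2x2 poolings on 32x32
print(dependent_fc_units(10_000_000, fixed, fan_in))
```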
We use 5x5 convolutional filters, one nonlinear hidden layer in each model, and each max-pooling operation is followed by dropout with a separately optimized rate. We use 2x2 max-pooling except in the model with only one convolutional layer, where we apply 3x3 pooling, as this seemed to boost performance and reduces the number of parameters.

Weights of trained nets are initialized as in Glorot and Bengio (2010). The models trained in Section 2 contain eight convolutional layers organized into three groups (2-2-4) and two fully-connected hidden layers. The Bayesian hyperparameter optimization controls four constants C1, C2, C3, H1, all in the range [0, 1], that are then linearly transformed to the number of filters/neurons in each layer. The hyperparameters for which ranges were not shown in that section are: the separate dropout rates (DOc1, DOc2, DOc3, DOf1, DOf2) and the five constants Dh, Ds, Dv, As, Av controlling the HSV data augmentation. The ranges we selected are DOc1 ∈ [0.1, 0.3], DOc2 ∈ [0.25, 0.35], DOc3 ∈ [0.3, 0.44], DOf1 ∈ [0.2, 0.65], DOf2 ∈ [0.2, 0.65], Dh ∈ [0.03, 0.11], Ds ∈ [0.2, 0.3], Dv ∈ [0.0, 0.2], As ∈ [0.2, 0.3], Av ∈ [0.03, 0.2], partly guided by Snoek et al. (2015) and visual inspection of the resulting augmentations.

The number of filters and hidden units for the models have the following bounds:

1 conv. layer: 50-500 filters, 200-2000 hidden units, number of units in the bottleneck is the dependent variable.
2 conv. layers: 50-500 filters, 100-400 filters, number of hidden units is the dependent variable.
3 conv. layers: 50-500 filters (layer 1), 100-300 filters (layers 2-3), number of hidden units is the dependent variable.
4 conv. layers: 50-300 filters (layers 1-2), 100-300 filters (layers 3-4), number of hidden units is the dependent variable.

All convolutional filters in the model are sized 3x3, max-pooling is applied over windows of 2x2, and we use ReLU units throughout all our models. We apply dropout after each max-pooling layer with the three rates DOc1, DOc2, DOc3 and after each of the two fully-connected layers with the same rate DOf."}, {"section_index": "11", "section_name": "6.3. DETAILS OF TRAINING STUDENT MODELS OF VARIOUS DEPTHS ON ENSEMBLE LABELS", "section_text": "Our student models have the same architecture as the models in Sections 6.1 and 6.2. The model without convolutional layers consists of one linear layer that acts as a bottleneck, followed by a hidden layer of ReLU units. The following hyperparameters are optimized: initial learning rate [0.0013, 0.016] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), input-scale ∈ [0.8, 1.25], global initialization scale (after initialization) ∈ [0.4, 2.0], and layer-width constants C1, C2 ∈ [0, 1] that control the number of filters or neurons. The exact ranges for the number of filters, and the implicitly resulting number of hidden units, were chosen for all twenty optimization experiments independently, as architectures, number of units, and number of parameters strongly interact.

For the non-convolutional models we chose a slightly different hyper-parameterization. Given that all layers (in models with "two layers" or more) are nonlinear and fully connected, we treat all of them similarly from the hyperparameter-optimizer's point of view.
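A minimal sketch of drawing one hyperparameter configuration from the Section 6.3 ranges quoted above. Plain random sampling stands in for the Bayesian optimizer, which would instead propose points adaptively; only the ranges and the log-scale treatment of learning rate and momentum come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    # "optimized on a log scale": sample uniformly in log-space
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

def sample_student_config():
    # ranges quoted in Section 6.3 above
    return {
        "learning_rate": log_uniform(0.0013, 0.016),
        "momentum":      log_uniform(0.68, 0.97),
        "input_scale":   rng.uniform(0.8, 1.25),
        "init_scale":    rng.uniform(0.4, 2.0),
        "C1": rng.uniform(0.0, 1.0),
        "C2": rng.uniform(0.0, 1.0),
    }

print(sample_student_config())
```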
In order to smoothly enforce the parameter budgets without rejecting any samples from the Bayesian optimizer, we instead optimize the ratios of hidden units in each layer (numbers between 0 and 1), and then re-normalize and scale them to the final number of neurons in each layer to match the target parameter budget.

Figure 2 is similar to Figure 1, but includes preliminary results from experiments for models with 100M parameters. We are also running experiments with 300M parameters. Unfortunately, Bayesian optimization on models with 100M and 300M parameters is even more expensive than for the other points in the graph. As expected, adding capacity to the convolutional students (top of the figure) modestly increases their accuracy. Preliminary results for the MLPs, however (too preliminary to include in the graph), may not show the same increase in accuracy with increasing model size. Models with two or three hidden layers may benefit from adding capacity to each layer, but we have yet to see any benefit from adding capacity to the MLPs with four or five hidden layers.

[Figure 2 plot: accuracy vs. number of parameters (millions, log scale, up to 100M) for CNN students with 1-4 convolutional layers and MLP students with 1-5 hidden layers; the vertical offsets mark the "compression gap".]

Figure 2: See Figure 1."}]
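The ratio re-normalization trick described above can be written down directly: treat the optimizer's per-layer ratios as relative widths, then solve for the common scale that hits the parameter budget (a quadratic, since hidden-to-hidden weight matrices grow with the square of the scale). A sketch under simplified assumptions (plain MLP with two or more hidden layers, counting weights and biases only):

```python
import numpy as np

def widths_for_budget(ratios, d_in, d_out, budget):
    """Scale the per-layer width ratios (from the hyperparameter optimizer)
    by a common factor s so that the resulting MLP hits the parameter budget;
    solves the quadratic a*s^2 + b*s - budget = 0 for s."""
    r = np.asarray(ratios, dtype=float)
    a = np.sum(r[:-1] * r[1:])                    # hidden-hidden weights scale as s^2
    b = d_in * r[0] + d_out * r[-1] + np.sum(r)   # in/out weights and biases scale as s
    s = (-b + np.sqrt(b * b + 4 * a * budget)) / (2 * a)
    return np.maximum(1, np.round(r * s)).astype(int)

# e.g. three hidden layers on CIFAR-10-sized inputs (illustrative numbers)
print(widths_for_budget([0.9, 0.5, 0.3], d_in=3072, d_out=10, budget=10_000_000))
```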
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we focus on the first problem and use dynamical system to analyze the nonlinea:\nsradient descent dynamics of certain two-layered nonlinear network in the following form:\nwhere o(x) = max(,0) is the ReLU nonlinearity. We consider the following setting: a student\nnetwork learns the parameters that minimize the Jy distance between its prediction and the super-\nvision provided by the teacher network of the same size with a fixed set of parameters w*. We\nassume all inputs x to follow Gaussian distribution and thus the network is bias-free. Eqn. {is\nhighly nonconvex and could contain exponential number of symmetrically equivalent solutions.\nTo analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks\n(See Lemma|2.1) in the teacher-student setting under J. loss. Then for K = 1, we prove that the\nnonlinear gradient dynamics of Eqn. [1] has a close form and converges to w* with at least (1 \u2014"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Deep learning has made substantial progress in many applications, including Computer Vision [He\net al. (2016); Simonyan & Zisserman](2015); (2015); Krizhevsky et al.|(2012)], Nat-\nural Language Processing [Sutskever et al. Speech Recognition (2012)).\nHowever, till now, how and why it works remains elusive due to a lack of theoretical understanding.\nFirst, how simple approaches like gradient descent can solve a very complicated non-convex opti-\nmization effectively. Second, how the deep models, especially deep convolutional models, achieve\ngeneralization power despite massive parameters.\nStudent\n\nNetwork\nw\n\nTeacher\nNetwork\n\nw\n\n~ layer \u00a2\n\nWe\n(d) + layer \u00a2\n1 1\nStudent Teacher\nNetwork Network We\nw w* We xX\n\nsees layer 041\ne)/2 probability, if initialized randomly with standard derivation on the order of 1/ Vd, verifyin:\ncommonly used initialization techniques [Glorot & Bengio| (2010); He et al. 2015); LeCun et al\n\n)],. When AK > 2, we prove that when the teacher parameters {w,};*., form orthonorma\nbases, (1) a symmetric initialization of a student network gets stuck at a saddle point and (2) unde\na certain symmetric breaking weight initialization, the dynamics converges to w*, without gettin,\nstuck into any local minima. Note that in both cases, the initialization can be arbitrarily close t\n\nthe origin for a fixed ||w*||, showing that such a convergence behavior is beyond the local conve:\nstructure at w*. To our knowledge, this is the first proof of its kind.\nPrevious works also use dynamical system to analyze deep neural networks. [Saxe et al.| (2013)\nanalyzes the dynamics of multilayer linear network, and [Kawaguchi] (2016)] shows every loca\n\nminima is global for multilinear network. Very little theoretical work has been done to analyze\n\nthe dynamics of nonlinear networks, especially deep ones. [Mei et al.](2016)] shows the globa\nconvergence when K = 1 with activation function (x) when its derivatives 07, 0\u201d, 0\u201d are boundec\n\nand o\u2019 > 0. Similar to our approach, (1996)] also uses the student-teacher setting anc\nanalyzes the dynamics of student network when the teacher\u2019s parameters w* forms a orthonoma\nbases; however, it uses (x) = erf(x) as the nonlinearity and only analyzes the local behaviors o:\nthe two critical points (the saddle point in symmetric initializations, and w*). 
In contrast, we prove the global convergence behavior in certain symmetry-breaking cases."}, {"section_index": "2", "section_name": "2.1 NOTATION", "section_text": "Denote X as the N-by-d input data matrix and w* as the parameter of the teacher network with desired N-by-1 output u = g(X; w*). Now suppose we have an estimator w and the estimated output v = g(X; w). We want to know, with ℓ2 loss E(w) = ½‖u − v‖² = ½‖u − g(X; w)‖², whether gradient descent will converge to the desired solution w*.

Figure 1: (a) We consider the student and teacher network as nonlinear neural networks with ReLU nonlinearity. The student network updates its weight w from the output of the teacher, whose weights w* are fixed. (b)-(c) The network structure we consider in the K = 1 and K ≥ 2 cases. (d) Notations used in the multilayer ReLU gradient update rule (Sec. 2.2).

Many previous works analyze nonlinear networks based on the assumption of independent activations: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutually independent. For example, [Choromanska et al. (2015a;b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent activations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in a nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. In this paper, no assumption of independent activation is made. For sigmoid activation, [Fukumizu & Amari (2000)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. (2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with tensor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.

The paper is organized as follows. Sec. 2 introduces the basic formulation and some interesting novel properties of ReLU in multilayered ReLU networks. Sec. 3 and Sec. 4 then analyze the two-layered model Eqn. 1 for K = 1 and K ≥ 2, respectively. Sec. 5 shows that simulation results are consistent with the theoretical analysis. Finally, Sec. 7 gives detailed proofs for all theorems.

The gradient descent update is w^(t+1) = w^(t) + ηΔw^(t), where Δw^(t) = −∇E(w^(t)). If we let η → 0, then the update rule becomes a first-order differential equation dw/dt = −∇E(w). In this case, dE/dt = ∇E(w)^T dw/dt = −‖∇E(w)‖² ≤ 0, i.e., the function value E is nonincreasing over time. The key is to check whether there exist other critical points w ≠ w* so that ∇E(w) = 0.

E[dE/dt] = −E[∇E(w)^T ∇E(w)] ≤ −E[∇E(w)]^T E[∇E(w)] ≤ 0

In this paper, we discover a few useful properties of ReLU that make our analysis much simpler. Denote D = D(w) = diag(Xw > 0) as an N-by-N diagonal matrix. The l-th diagonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we could write σ(Xw) = DXw.
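The gating identity σ(Xw) = DXw, and the fact that D depends only on the direction of w, are easy to check numerically. A small sanity-check sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 10
X = rng.normal(size=(N, d))
w = rng.normal(size=d)

D = np.diag((X @ w > 0).astype(float))   # gating matrix D(w)
assert np.allclose(np.maximum(X @ w, 0), D @ (X @ w))   # sigma(Xw) = D X w

# D depends only on the direction of w, not its magnitude:
D2 = np.diag((X @ (3.7 * w) > 0).astype(float))
assert np.allclose(D, D2)
```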
Note that D only depends on the direction of w but not its magnitude.

Note that for ReLU, D is also "transparent" with respect to derivatives. For example, the Jacobian J_w[σ(Xw)] = σ'(Xw)X = DX at differentiable regions. This gives a very concise rule for gradient descent in ReLU networks: suppose we have a negative gradient inflow vector g (of dimension N-by-1) on the current ReLU node with weights w; then we can simply write the update Δw as:

Lemma 2.1 For a neural network with ReLU nonlinearity and using ℓ2 loss to match with a teacher network of the same size, the negative gradient inflow g_j for node j at layer c has the following form:

The intuition here is to start from g = u − v (true for ℓ2 loss) at the top layer and use induction. With this formulation, we could write the finite dynamics for w_c (all parameters in layer c). Denote the N-by-d_c d_{c+1} matrix R_c = [L_j D_j X]_{j∈[c]} (blocks concatenated horizontally) and similarly R*_c = [L*_j D*_j X]_{j∈[c]}. Using gradient descent rules:

Δw_j = X^T D_j g_j = X^T D_j L_j Σ_{j'} (L*_{j'} D*_{j'} X w*_{j'} − L_{j'} D_{j'} X w_{j'}) = X^T D_j L_j (R*_c w*_c − R_c w_c)

Δw_c = R_c^T (R*_c w*_c − R_c w_c)

Δw^(t) = X^T D^(t) g^(t) = X^T D^(t) (D* X w* − D^(t) X w^(t))

Note here how the notation of D^(t) comes into play (and D^(t) D^(t) = D^(t)). Indeed, when the neuron is cut off at sample l, then (D^(t))_{ll} is zero and will block the corresponding gradient component.

In our analysis, we assume entries of the input X follow a Gaussian distribution. In this situation, the gradient is a random variable and Δw = −E[∇E(w)]. The expected E[E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because of the inequality shown above.

Δw = J_w[σ(Xw)]^T g = X^T D g

This can be easily applied to multilayer ReLU networks. Denote j ∈ [c] if node j is in layer c, d_c as the width of layer c, and u_j and v_j as the output of the teacher network and the student network, respectively. A simple deduction yields the following lemma:

g_j = L_j Σ_{j'} (L*_{j'} u_{j'} − L_{j'} v_{j'})

w^(t+1) = w^(t) + η X^T X (w* − w^(t))

Then how should we analyze it? Notice that in Δw both of the two terms have the form F(e, w) = X^T D(e) D(w) X w. Using this form, E[Δw] = E[F(w/‖w‖, w*)] − E[F(w/‖w‖, w)]. Here e is a unit vector called the "projected" weight. In the following, we will show that E[F(e, w)] has the following close form under the i.i.d. Gaussian assumption on X:

E[F(e, w)] = N/2π ((π − θ)w + ‖w‖ sin θ · e)

where θ ∈ [0, π] is the angle between e and w. Note that the expectation analysis smooths out the non-differentiable property of ReLU, leaving only one singularity at e = 0. The intuition is that expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E[Δw] takes the following closed form:

E[Δw] = N/2 (w* − w) + N/2π (α sin θ · w − θ w*)

See the Appendix for the proof. The intuition is to represent dV/dt as a 2-by-2 bilinear form of the vector [‖w‖, ‖w*‖], whose bilinear coefficient matrix is positive definite. One question arises: will the same approach show the dynamics converges when the initial conditions lie outside the region Ω, in particular for any region that includes the origin? The answer is probably no. Note that w = 0 is a singularity at which Δw is not continuous (if approaching w = 0 from different directions, Δw is different). This is due to the fact that the ReLU function is not differentiable at the origin. We could remove this singularity by "smoothing out" ReLU around the origin. This will yield Δw → 0 when w → 0.
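The closed form for E[F(e, w)] can be verified by Monte Carlo: draw Gaussian rows x_i, accumulate F(e, w) = X^T D(e) D(w) X w directly, and compare with the formula. A sketch (dimensions and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100_000, 8
w = rng.normal(size=d)
e = rng.normal(size=d); e /= np.linalg.norm(e)   # unit "projected" weight

X = rng.normal(size=(N, d))
gate_e = (X @ e > 0).astype(float)
gate_w = (X @ w > 0).astype(float)
F = X.T @ (gate_e * gate_w * (X @ w))   # F(e, w) = X^T D(e) D(w) X w

theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1, 1))
F_closed = N / (2 * np.pi) * ((np.pi - theta) * w
                              + np.linalg.norm(w) * np.sin(theta) * e)
# relative error should be small and shrink like 1/sqrt(N)
print(np.linalg.norm(F - F_closed) / np.linalg.norm(F))
```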
In this case, V(0) = 0, so the Lyapunov method could only tell that the dynamics is stable but not convergent. Note that for ReLU activation, σ'(x) = 0 for certain negative x even after a local smoothing, so the global convergence claim in [Mei et al. (2016)] for ℓ2 loss does not apply.

Random Initialization. Then we study how to sample w^(1) so that w^(1) ∈ Ω. We would like to sample within Ω, but we don't know where w* is. Sampling around the origin with big radius r ≥ 2‖w*‖ is inefficient, in particular in high-dimensional space. This is because when the sample is uniform, the probability of hitting the ball is proportional to (‖w*‖/r)^d ≤ 2^{-d}, which is exponentially small.

Linear case. In this situation D^(t) = D* = I (no gating in either forward or backward propagation).

Nonlinear (ReLU) case. In this case, Δw = X^T D (D* X w* − D X w), in which D is a function of w. Intuitively, this term goes to zero when w → w*, and could be approximated as ½ X^T X (w* − w) in the i.i.d. Gaussian case, since roughly half of the samples are blocked. However, once we make such an approximation, we lose the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.

Lemma 3.1 Denote F(e, w) = X^T D(e) D(w) X w, where e is a unit vector, X = [x_1, x_2, ..., x_N]^T is the N-by-d sample matrix, and D(w) = diag(Xw > 0) is a binary diagonal matrix. If x_i ~ N(0, I) and are i.i.d. (and thus bias-free), then:

E[F(e, w)] = N/2π ((π − θ)w + ‖w‖ sin θ · e)    (10)

where α = ‖w*‖/‖w‖ and θ ∈ [0, π] is the angle between w and w*. The first term is expected, while the last two terms show the nonlinear behavior. Using Lyapunov's method, we show that the dynamics (if treated continuously) converges to w* when w^(1) ∈ Ω = {w : ‖w − w*‖ < ‖w*‖}:

Lemma 3.2 When w^(1) ∈ Ω = {w : ‖w − w*‖ < ‖w*‖}, the Lyapunov function V(w) = ½‖w − w*‖² has dV/dt < 0 and the system is asymptotically stable, and thus w^(t) → w* when t → +∞.

Figure 2: (a) Sampling strategy to maximize the probability of convergence. (b) Relationship between sampling range r and desired probability of success (1 − ε)/2. (c) Geometry of the K = 1 2D case. There is a singularity at the origin. Initialization with random weights around the origin has decent probability to converge to w*.

A better idea is to sample around the origin with very small radius (but not at w = 0), so that the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples are useful (Fig. 2(a)), as shown in Theorem 7.4 in the Appendix.

The intuition here is to lower-bound the probability of the shaded area (Fig. 2(b)). From the proof, the conclusion could be made stronger to show r ~ 1/√d, consistent with common initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. Fig. 2(c) shows an example in the 2D case, in which there is a singularity at the origin, and sampling towards w* yields the convergence. This is consistent with the analysis above."}, {"section_index": "3", "section_name": "4 MULTIPLE RELUS CASE", "section_text": "Now we are ready to analyze the network g(x) = Σ_{j=1}^K σ(w_j^T x) for K ≥ 2 (Fig. 1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996); Soudry & Carmon (2016); Fukumizu & Amari (2000)].
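Before moving to K ≥ 2, the K = 1 claims above can be illustrated by iterating the expected dynamics E[Δw] in closed form from a small random initialization; by the sampling argument just given, roughly half of the random seeds should land in the convergence basin. A sketch (step size and dimension are arbitrary; the factor N is absorbed into the step size):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
w_star = rng.normal(size=d)

# small random initialization, radius on the order of 1/sqrt(d)
w = rng.normal(size=d) / np.sqrt(d) * 0.1

eta = 1e-3
for t in range(20_000):
    a = np.linalg.norm(w_star) / np.linalg.norm(w)            # alpha
    cos_t = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))
    theta = np.arccos(np.clip(cos_t, -1, 1))
    # E[dw] = N/2 (w* - w) + N/2pi (alpha sin(theta) w - theta w*)
    dw = 0.5 * (w_star - w) + (a * np.sin(theta) * w - theta * w_star) / (2 * np.pi)
    w += eta * dw

# ~0 when the trajectory converges; succeeds for roughly half of random seeds
print(np.linalg.norm(w - w_star))
```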
In this case, L_j = L*_j = I for 1 ≤ j ≤ K. Then we have the following nonlinear dynamics from Eqn. 7:

2π/N E[f(w_j, w_{j'}, w*_{j'})] = (π − θ*_{j'j}) w*_{j'} − (π − θ_{j'j}) w_{j'} + (‖w*_{j'}‖ sin θ*_{j'j} − ‖w_{j'}‖ sin θ_{j'j}) w_j/‖w_j‖

where θ_{j'j} = ∠(w_{j'}, w_j) and θ*_{j'j} = ∠(w*_{j'}, w_j). Eqn. 12 and its expected version give very complicated nonlinear dynamics and could be hard to solve in general. Unlike K = 1, a similar approach with a Lyapunov function does not yield a decisive conclusion. However, if we consider the symmetric case w_j = P_j w and w*_j = P_j w*, where P_j is a cyclic permutation matrix that maps index j' + 1 to ((j' + j) mod K) + 1 (and P_1 is the identity matrix), then the RHS of the expected version of Eqn. 12 can be simplified as follows:

Δw_j = Σ_{j'=1}^K f(w_j, w_{j'}, w*_{j'}),  f(w_j, w_{j'}, w*_{j'}) = X^T D_j (D*_{j'} X w*_{j'} − D_{j'} X w_{j'})    (12)

E[Δw_j] = Σ_{j'} E[f(w_j, w_{j'}, w*_{j'})] = Σ_{j'} E[f(P_j w, P_{j'} w, P_{j'} w*)]
        = Σ_{j'} E[f(P_j w, P_j P_{j'} w, P_j P_{j'} w*)]    ({P_j}_{j=1}^K is a group)
        = P_j Σ_{j'} E[f(w, P_{j'} w, P_{j'} w*)]    (‖P w_1‖ = ‖w_1‖, ∠(P w_1, P w_2) = ∠(w_1, w_2))
        = P_j E[Δw_1]    (14)

2π/N E[Δx] = (π − θ) − πx − (π − φ)(K − 1)y + [(K − 1)(α sin φ* − sin φ) + α sin θ] x
2π/N E[Δy] = (π − φ*) − πy − (π − φ)(x + (K − 2)y) + [(K − 1)(α sin φ* − sin φ) + α sin θ] y    (16)

α = (x² + (K − 1)y²)^{-1/2},  cos θ = αx,  cos φ* = αy,  cos φ = α²(2xy + (K − 2)y²)

Corollary 4.2 For a bias-free two-layered ReLU network g(x; w) = Σ_{j=1}^K σ(w_j^T x) that takes Gaussian i.i.d. inputs (Fig. 1), if the teacher's parameters {w*_j} form orthogonal bases, then when the student parameters are initialized in the form w_j^(1) = x^(1) w*_j + y^(1) Σ_{j'≠j} w*_{j'}, where (x^(1), y^(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, the dynamics (Eqn. 12) converges to {w*_j} without being trapped in local minima.

When symmetry is broken, since the closure of Ω includes the origin, there exists a path starting at an arbitrarily small neighborhood of the origin to w*, regardless of how large ‖w*‖ is. In contrast to traditional convex analysis that only gives a local parameter-dependent convergence basin around w*, here we obtain a convergence basin that is parameter-independent. In comparison, [Saad & Solla (1996)] uses a different activation function (σ(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher's weights w*), leaving symmetry breaking an empirical procedure. Here we show that it is possible to give global convergence analysis on certain symmetry-breaking cases for two-layered ReLU networks.

By symmetry, Corollary 4.1 immediately suggests that when w^(1) = y^(1) Σ_j w*_j + (x^(1) − y^(1)) w*_j, then the dynamics will converge to P_j w*. Since x > y but they can be arbitrarily close, the slightest perturbation of the symmetric solution x = y leads to a different fixed point, which is a permutation of w*. This is very similar to the Spontaneous Symmetry-Breaking (SSB) procedure in physics, in which a high-energy state with full symmetry goes to a low-energy state and only retains part of the symmetry. In this case, the energy is the objective function E, the high-energy state is the initialization that is almost symmetric but with small fluctuations, and the low-energy state is the fixed point the dynamics converges into.

This means that if all w_j and w*_j are symmetric under the action of the cyclic group, so is their expected gradient. Therefore, the trajectory {w^(t)} keeps such cyclic structure.
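The equivariance E[Δw_j] = P_j E[Δw_1] of Eqn. 14 can be checked empirically by averaging the gradient of Eqn. 12 over many Gaussian samples with cyclically symmetric weights. A Monte-Carlo sketch for the orthonormal-teacher case (d = K, w*_j = e_j; the sample size and the values of x, y are arbitrary choices), before continuing with the reduced dynamics below:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                                                    # d = K here
P = [np.roll(np.eye(K), j, axis=0) for j in range(K)]    # cyclic permutations

w_star = np.eye(K)           # teacher: w*_j = e_j (orthonormal basis)
x, y = 0.8, 0.3              # symmetric student: w_j = x e_j + y sum_{k != j} e_k
w = np.stack([x * np.eye(K)[j] + y * (np.ones(K) - np.eye(K)[j]) for j in range(K)])

def empirical_dw(X):
    D = (X @ w.T > 0).astype(float)           # gates of the student nodes
    Ds = (X @ w_star.T > 0).astype(float)     # gates of the teacher nodes
    resid = (Ds * (X @ w_star.T) - D * (X @ w.T)).sum(axis=1)   # u - v per sample
    return np.stack([X.T @ (D[:, j] * resid) for j in range(K)])

X = rng.normal(size=(500_000, K))
dW = empirical_dw(X)
for j in range(K):            # relative deviation from P_j dW_1; shrinks with N
    print(j, np.linalg.norm(dW[j] - P[j] @ dW[0]) / np.linalg.norm(dW[0]))
```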
Instead of solving a system of K equations, we only need to solve one:

E[Δw] = Σ_{j'=1}^K E[f(w, P_{j'} w, P_{j'} w*)]    (15)

Surprisingly, there is another layer of symmetry in Eqn. 15 when {w*_j} forms an orthonormal basis (w*_j^T w*_{j'} = δ_{jj'}). In this case, if we start with w^(1) = x w* + y Σ_{j≠1} P_j w*, then we could show that the trajectory keeps this structure and Eqn. 15 can be further reduced to the 2D nonlinear dynamics of Eqn. 16 over (x, y).

[Figure 3 panels: (a) error distribution over angle; (b) relative RMS error vs. #samples (Gaussian inputs); (c) relative RMS error vs. #samples (uniform inputs).]

Figure 3: (a) Distribution of relative RMS error with respect to θ = ∠(w, e). (b) Relative RMS error decreases with sample size, showing the asymptotic behavior of the close form expression Eqn. 10. (c) Eqn. 10 also works well when the input data X are generated by other zero-mean distributions, e.g., the uniform distribution in [−1/2, 1/2].

[Figure 4 panels: (a)-(b) vector fields in the (x, y) plane with visible saddle points; (c) trajectories; (d) training curves.]

Figure 4: (a)-(b) Vector field in the (x, y) plane following the 2D dynamics (Eqn. 16) for K = 2 and K = 5. Saddle points are visible. The parameters of the teacher's network are at (1, 0). (c) Trajectory in the (x, y) plane for K = 2, K = 5, and K = 10. All trajectories start from (10^-3, 0). Even when the starting point is aligned with w*, gradient descent dynamics takes detours. (d) Training curves. When K is larger, the convergence is faster.

From the simulation shown in Fig. 4, we could see that gradient descent takes a detour to reach the desired solution w*, even when the initialization is aligned with w*. This is because in the first stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x and y increase); when the "obvious" component has been explained away, the residue changes its direction and pushes some ReLU nodes to explain other components as well (x increases but y decreases).

Empirically this path also converges to w* under noise. We leave it as a conjecture that the system converges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w*. The reason is that a random initialization almost never gives ties. Without a tie, there exists one leading component which will dominate the convergence.

Conjecture 4.3 When the initialization w_j^(1) = x^(1) w*_j + y^(1) Σ_{j'≠j} w*_{j'} + ε, where ε is Gaussian noise and (x^(1), y^(1)) ∈ Ω, the dynamics of Eqn. 12 also converges to w* without being trapped in local minima.

We verify our close form expression of E[F(e, w)] = E[X^T D(e) D(w) X w] (Eqn. 10) with simulation.
We randomly pick e and w so that their angle ∠(e, w) is uniformly distributed in [0, π]. We prepare the input data X with a standard Gaussian distribution and compare the close form solution E[F(e, w)] with F(e, w), the actual data term in gradient descent without expectation. We use the relative RMS error: err = ‖E[F(e, w)] − F(e, w)‖/‖F(e, w)‖. As shown in Fig. 3(a), the error distribution over angles shows the properties of the close-form solution. For small θ, D(w) and

[Figure panels: relative RMS error curves over angle and sample size; convergence curves for different noise levels and top weights.]

Figure 5: Top row: Convergence when the initial weights deviate from the symmetric initialization: w^(1) = 10^-3 w* + ε. Here ε ~ N(0, 10^-3 · noise). The 2-layered network converges to w* until very large noise is present. Both teacher and student networks use g(x) = Σ_{j=1}^K σ(w_j^T x). Each experiment has 8 runs. Bottom row: Convergence when we use g2(x) = Σ_{j=1}^K a_j σ(w_j^T x). Here the top weights a_j are fixed at different numbers (rather than 1). Large positive a_j corresponds to fast convergence. When a_j has positive/negative components, the network does not converge to w*.

Fig. 3(b) shows that the close form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X, e.g., the uniform distribution in [−1/2, 1/2]. As shown in Fig. 3(c), the close form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to future work to prove its usability for broader distributions.

Fig. 4(a) and (b) show the 2D vector field given by the 2D dynamics (Eqn. 16), and Fig. 4(c) shows the 2D trajectory towards convergence to the teacher's parameters w*. Interestingly, even when we initialize the weights as (10^-3, 0), aligned with w*, gradient descent takes detours to reach the destination. One explanation is that at the beginning all nodes move in similar directions trying to explain the data; once the data have been explained partly, specialization follows (y decreases)."}, {"section_index": "4", "section_name": "6 CONCLUSION AND FUTURE WORK", "section_text": "In this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLU networks in the form g(x; w) = Σ_{j=1}^K σ(w_j^T x), where σ(x) = max(x, 0) is the ReLU node. We assume that the input x follows a Gaussian distribution and the output is generated by a teacher network with parameters w*.
In K = 1 we show that a close-form nonlinear dynamics can be obtained and its convergence to w* can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al. (2015)] and is independent of the value of w*. For K ≥ 2, when the teacher parameters {w_j} form an orthonormal basis, we prove that the trajectory from symmetric initialization is trapped at a saddle point, while certain symmetry-breaking initializations converge to w* without being trapped in any local minima. Future work includes analysis of general cases (or the symmetric case plus noise) for K ≥ 2, and a generalization to multilayer ReLU (or other nonlinear) networks.

Fig. 5 shows empirical convergence for K ≥ 2 when the initialization deviates from the symmetric initialization. Unless the deviation is large, gradient descent converges to w*. We also check the convergence of a more general network g2(x) = Σ_{j=1}^K a_j σ(w_j^T x). When a_j > 0, convergence follows; however, when some a_j are negative, the network does not converge to w*, even when the student network already knows the ground-truth values of {a_j}."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. Computer Vision and Pattern Recognition (CVPR), 2016.

Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Janzamin, Majid, Sedghi, Hanie, and Anandkumar, Anima. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. CoRR abs/1506.08473, 2015.

Kawaguchi, Kenji. Deep learning without poor local minima. Advances in Neural Information Processing Systems, 2016.

LeCun, Yann A, Bottou, Léon, Orr, Genevieve B, and Müller, Klaus-Robert. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9-48. Springer, 2012.

Mei, Song, Bai, Yu, and Montanari, Andrea. The landscape of empirical risk for non-convex losses. arXiv preprint arXiv:1607.06534, 2016.

Saxe, Andrew M, McClelland, James L, and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR), 2015.

Soudry, Daniel and Carmon, Yair.
No bad local minima: Data independent training error guarantees\nfor multilayer neural networks. arXiv preprint arXiv: 1605.08361. 2016.\nSutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural net-\nworks. In Advances in neural information processing systems, pp. 3104\u20143112. 2014.\nSzegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir,\nErhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions.\nIn Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.\nFukumizu, Kenji and Amari, Shun-ichi. Local minima and plateaus in hierarchical structures of\nmultilayer perceptrons. Neural Networks, 13(3):317\u2014327, 2000.\nSaad, David and Solla, Sara A. Dynamics of on-line gradient descent learning for multilayer neural\nnetworks. Advances in Neyral Information Processine Svstems. nn. 309\u2014308. 1996.\nHere we list all detailed proof for all the theorems.\nLemma 7.1 For neural network with ReLU nonlinearity and using lz loss to match with a teachei\nnetwork of the same size, the negative gradient inflow g; for node j at layer c has the following\nwhere L; and L= are N-by-N diagonal matrices. For any k \u20ac |e + 1], Ly = 0\nsimilarly for L*.\n\nj\u20acle] wy5p-D; Lj anc\nSetting Ly, = >>, wj,pD;L; and Lt = So,\n\n4\n\nD* L* (both are diagonal matrices), we thus have:\nSk = Dy Lj ap: \u2014 Ly =\nLp Up! Lp VE = *\n> pL Var = Lp, y Li Ug \u2014 Le Vee\nkl\n6>0\n\n0<0\n\nF teacher's\nparams\nigure 6: (a)-(b) Two cases in Lemmaf7.2| (c) Convergence analysis in the symmetric two-layered\ngj = Lj Do (Ljuy \u2014 Livy)\nProof We prove by induction on layer. For the first layer, there is only one node with g = u \u2014 v,\ntherefore L; = Lj, = I. Suppose the condition holds for all node j \u20ac [c]. Then for node k \u20ac [c+],\nwe have:\nJ J\n\n> wir Djgj = > wyr-DjL; (> Lay \u2014 > bi)\nJ it\n> wyprD;L; (= Li, > Dj wi AK! - > Ly > bymane\n\nj\n> wyprD;L; > Li,Dj, Wir Uk! ->) wykDjL; > Ly Dy > WR VEE\nj i kr j J ke\n\nJ\n\n> (= vats) (= 150i] Up \u2014 > (= wads) (= Lyons VE\n7 7\n\nki j ke\nE[F(e,w)] = x ((a \u2014 @)w + ||w]| sin be)\n\nTT\nProof Note that F' can be written in the following form:\n1 1 269 \u2014 sin2\u00a2 9 1 -\u2014cos2d\u00a2o 0\nRigo) = EJ> SO xxl] = 1\u2014cos2\u00a2) 2d9 +sin2\u00a2) 0\nN i:61\u20ac[0,\u00a20] An 0 0 2bola-2\ndo 1 \u2014sin2\u00a29 1-\u2014cos2\u00a2o9 O\nIg + \u2014 |1\u2014cos2\u00a29 sin 2\u00a29 0\n20 An 0 0 0\nKf (e,w)} = LV ela) \u2014 (0) ) w\n\nN \u2014sin2@ 1\u2014cos26 0) [ cos\u00e9\n\u2014_ (2 \u2014 0)w \u2014 ||w|| h \u2014 cos 20 sin 260 H - sna]\n4a 0\n\n0 0. 0\na ((@~ ew + wi [%9\"I)\n\nN ((7 \u2014 @)w + ||w]| sin be)\nE[F(e,w)] = N (R(a + 6) \u2014 R(0)) w *\n\n((a + 0)w \u2014 ||w|| sin de)\nF(e,w) = > xix} W\n\nix} e>0,xTw>0\n1 : ,\nR(\u00a2o) =E ls > =| = E[x;x]|\u00a2i \u20ac [0, do] P [4i \u20ac [0. do]\n#:6:\u20ac[0,\u00a20]\nrsing a\n\nFoo Feo 790 Theos\n| II | \u201c\"l(rsing reosd ... xq] p(r)p(@) ) TI (x, )rdrd\u00a2da3...da,\n0 -oo Jo aa\nLa\n\nk=3\nNotice that by abuse of notation, the # appears in Eqn. [20]is the absolute value and Eqn.|20|follows\nnn\nMe= 1 sin 20 + 20 \u2014 20 \u2014(2n \u2014 0) cos 6 \u2014 sin @\n~ 9 |\u2014(27 \u2014 0) cos 6 \u2014 sin@ Qn\ntdet (MZ) 2n(sin 20 + 2m \u2014 20) \u2014 [(2a \u2014 0) cos @ + sin 6\u201d\n2n(sin 20 + 20 \u2014 20) \u2014 [(20 \u2014 0)? cos\u201d 6 + (2m \u2014 6) sin 20 +s\n= (dn? \u20141)sin? 6 \u2014 476 + 476 cos\u201d 6 \u2014 6? 
cos\u201d 6 + O sin 20\n= (dn \u2014 4n0 \u2014 1)sin? 6 + 0 cos (2 sin 6 \u2014 6 cos 0)\nTheorem 7.4 The dynamics in Eqn. [17] converges to w* with probability at least (1 \u2014 \u20ac)/2, if the\ninitial value w\\) is sampled uniformly from B,. = {w : ||w|| <r} with:\ni iw'l\nVa-1(1)\nwhere V4(1) is the volume of the unit ball. Since the volume of d-dimensional unit ball is\nLemma 7.3 In the region Jw) \u2014 , the Lyapunov\n\nfunction V(w) = \u00a7 Llw \u2014w* ||? has V < Oand the system is asymptotically stable and thus wt)\nw* whent 3 oo.\nIn the following we will show that M is positive definite when @ \u20ac (0, 7/2]. It suffices to show that\nM,, > 0, Mo. > 0 and det(M) > 0. The first two are trivial, while the last one is:\nIA\n\n20\nd+1\n\nIlw\n\na\nl-e\n2\n\n1\n3 Val) \u20146Vg-1 > Valr)\nIA\n\nnlm\nSs\n\nS\n\u2018 jis\n\nz>0,0<s<1\nLemma 7.5 For \u00a2*, 0 and \u00a2 defined in Eqn.|17\n(1) 6, \u00a2* \u20ac [0, 7/2] and 0 \u20ac [0, 09) where 09 = arccos Te\n\n(2) cos@ = 1\u20140?(a \u2014y)? and sin @ = a(x \u2014 y)\\/2 \u2014 a2 (x \u2014 y)?.\n\n(3) &* > 6 (equality holds only when y = 0) and d* > 6.\n2) cos@ =1-\u2014a*(x\u2014y)* and sing = a(x \u2014 y)V/2 \u2014 a? (ax \u2014 y)?.\ncosd \u2014 a?(2xy + (K \u2014 2)y\")\ncos \u00e9* ay\n\na(2a + (KK \u2014 2)y) > a(a+(K -1)y) >1\nProof We discuss the three boundaries as follows:\nCase 1: y = 0,0 < x < 1, horizontal line. In this case, 9 = 0, @ = 1/2 and @* = 7/2. The\ncomponent of the dynamics in this line is:\nwhere I(x) = Jy t* \u201ce~ \"dt. So we have\n\nVall) T(d/2 + 1/2)\nVa-i(1) vr T(d/2 + 1)\n\nFrom Gautschi\u2019s Inequality\n\npoe T@+1)\n\nwith s = 1/2 and x = d/2 we have:\n\n(4 ye . ree) . (\u201c)\n\n2\nr ela rad\n\n<(xts)*% 2>0,0<s<1\n\nTherefore, it suffices to have\n\n(40)\n\n(41)\n\n(42)\n\n(43)\n\nNote that this upper bound is tight when 6 \u2014 0 and d > +x, since all inequality involved asymp-\nVa(1) T'(d/2 + 1/2)\nVan) ~\" V(d/2+1)\nIA\n\n20\nd+1\n\nIlw\n\na\nNote that this upper bound is tight when 6 + 0 and d + +00, since all inequality involved asymp-\ntotically becomes equal. |\ncos 0\nos *\n\ncos @\n\n(2? + (K ~ jy)?\nax\nay\na? (xy + (K \u2014 2)y\")\nwe have the following relations in the triangular region 0, = {(x,y):\u00ab >0,y > 0,4 >y+eo}\n(Fig.|6lc)):\nProof Propositions (1) and (2) are computed by direct calculations. In particular, note that since\ncos = ax = 1/\\/1+(K \u2014 1)(y/x)? and x > y > 0, we have cos 6 \u20ac (1/WK, 1] and @ \u20ac [0, 60).\nFor Preposition (3), @* = arccosay > @ = arccos ax because a > y. Finally, for \u00ab > y > 0, we\nhave\nTheorem 7.6 For the dynamics defined in Eqn.|16| there exists \u20ac) > 0 so that the trianglar region\nQe, = {(a,y) 2 > 0,y > 0,4 > y + \u20ac0} (Fig. [6fc)) is a convergent region. That is, the flow goes\ninwards for all three edges and any trajectory starting in Q,,, stays.\nCase 2: x = 1,0 < y < 1, vertical line. In this case, a < 1 and the x component of the dynamic\n\u2018Qe\n\u2014(r \u2014 6)(K \u2014 ljy\u2014 0+ (K \u2014 1)(asin \u00a2* \u2014 sind) + asi\n\u2014(K \u20141)[(a \u2014 \u00a2)y \u2014 (asin \u00a2* \u2014 sin \u00a2)| + (asin 6 \u2014 6)\no* \u20140\u2014e\u00a2+ |[(K \u2014 1)(asin \u00a2* \u2014 sind) + asin 6] \u20ac\n\ne(K \u20141) [asin - (1+ aK v) sino]\n\n(56)\n\nca(k -1) [Y= al (1+ aK 7) yaaa are]\nLemma 7.7 (Reparametrization) Denote \u00ab = x \u2014 y > 0. 
The terms ax, ay and ae involved in\nthe trigometric functions in Eqn.|16\\has the following parameterization:\nProof This transformation can be checked by simple algebraic manipulation. For example:\n(x \u2014 by tale \u2014y 2a a yp \u2014avV/1 = a2y?\n\ny(t 6-aV2=ar(@\u2014yP| +0 [ey2\u2014 (a \u2014 yP - V1 \u2014 ay?\nm= b\u2014a/2\u2014 aay > 7-5 V2>0\n1 1 1\nJVialy2+(K-1l) J +e/y)?+(K- DVR\n\nay\n8=cos0+VK \u20141sin0\nTo prove Eqn. 59] first we notice that K cos? = Kax = 8 + (K \u2014 1). Therefore, we have\n(K cos \u2014 B)* \u2014 (K \u2014 1)?63 = 0, which gives 6? \u2014 28 cos@ + 1\u2014 K sin? @ = 0. Solving this\nquadratic equation and notice that 8 > 1, 6 \u20ac [0, 7/2] and we get:\n3 = h,(8) \u2014(\u00a24+ (K \u20141)sin d)e\nDenote f3(8,\u00a2\u20ac\u2019) = f31 + f32 where\nfa1(5,\u20ac\u2019) o \u20140-\u00a2\u00a2+easind\nfzo(8,\u20ac) = (K \u20141)(asin\u00a2* \u2014 sind)\nfa = \u20ac(* \u2014\n* \u2014 4) + (1-\u20ac')(\u00a2* \u2014 0) -e\nE \u20140)\u2014\u20ac6+ Bosind > \u2014'0\n> -'0+ Basin > Bo (sino \u2014<\n3)\n1\nf33(0) = Bsin@ \u2014 6 3 sin 20 + VK \u2014 Isin\u00ae 0 \u2014\u00a2\nB-a B= Bae | B=Bo/e'\nrau rau\n\ne-1+Ky * (0+ 8 2)\n\na\n((K \u2014 1)(asin \u00a2* \u2014 sing) + asin) = -2(\u00a2\" \u20140)+6\n27\n\nAy = ~(n~6)(e-1+Ky) \u2014(6\" \u2014 \u00a2) \u2014 dy + ((K ~1)(asin 6\" ~ sing) + asind\n\n~(#~ d)(e= 14 Ky) ~ (6\" ~ 6) \u2014 (6 Oy <0 (\n8 = cos@ + Vcos? 6 + K sin? 6 \u2014 1 =cos@ + VK \u2014 Isin\u00e9\nWhen \u00a3 is fixed, {3 now is a monotonously decreasing function with respect to \u00ab > 0. Therefore,\nf3(B,\u20ac) > f3(8,\u20ac') for 0 < \u20ac < \u00e9\u2019 = Bo/8. If we could prove f3(Z,\u00a2\u2019) > 0 and only attain zero at\nknown critical point (8, \u00a2) = (1,1), the proof is complete.\nFor f32 it suffices to prove that \u20ac\u2019(a sin \u00a2* \u2014 sin d) = $2 sin \u00a2* \u2014 2 sing > 0, which is equivalent\nto sin d* \u2014 sing/B > 0. But this is trivially true since \u00a2* > \u00a2 and 6 > 1. Therefore, f32 > 0.\nNote that the equality only holds when \u00a2* = \u00a2 and 6 = 1, which corresponds to the horizontal line\nx \u20ac (0.1),y=0.\nProof We have Lyaponov function V = E[E] so that V = \u2014-E[AwTAw] < \u2014E[Aw]' E[Aw] <\n0. By Thm. other than the optimal solution w*, there is no other symmetric critical point.\nAw # 0 and thus V < 0. On the other hand, by Thm [7.6] the triangular region 2, is convergent, ir\nwhich the 2D dynamics is C\u00ae differentiable. Therefore, any 2D solution curve \u20ac(t) will stay within\nBy PoincareBendixson theorem, when there is a unique critical point, the curve either converges to <\nlimit circle or the critical point. However, limit cycle is not possible since V is strictly monotonous\n\ndecreasing along the curve. Therefore, \u20ac(t) will converge to the unique critical point, which i:\n(y, \u20ac) = (1.0) and so does the symmetric system (Ban IOP\nwhere v= \u2014= (VK \u20141-arccos(1/V kK) + 7). Furthermore, x, is a convergent critical point.\nProof The 1D system can be computed with simple algebraic manipulations (note that when x = y,\n\u00a2 = Oand 6 = \u00a2* = arccos(1/VK)). Note that the 1D system is linear and its close form solution\nis x) = rp + Ce~*/2N* and thus convergent.\nTheorem 7.10 Any trajectory in 0... converges to (y,\u20ac) = (1,0), following the dynamics defined\n\nin Eqn.\n2\nvac = \u20141K(a\u2014 2x)"}]
[{"section_index": "0", "section_name": "LEARNING TO COMPOSE WORDS INTO SENTENCES\nWITH REINFORCEMENT LEARNING", "section_text": "Yani Yogatama!, Phil Blunsom\u2019-?, Chris Dyer!, Edward Grefenstette', and Wang Ling!\nDeepMind and 2University of Oxford\n\u2018dyogatama, pblunsom, cdyer,etg, lingwang}@google -com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Languages ot oy in terms of hierarchical, nested structures on sequences of!\nwords {Chomsky} (Chomsky) However, the degree to which neural network architectures that com-\npute s Chomsky 0 Cr he meaning of sentences for practical applications should explicitly reflect\nsuch structures is a matter for debate. In this work, we use reinforcement learning to learn to con-\nstruct trees for computing sentence representations, guided by feedback from downstream tasks tha\ndepend on these representations. The space of structures that are considered by the learner includes\nboth fully sequential structures (corresponding to traditional recurrent neural network \u201cencoders\u201d.\nas well as all projective binary trees. Thus, although we take seriously the notion that good compo-\nsitional architectures might be tree-structured, we specify neither the form of the tree nor whether <\ntree is necessary at all, and instead leave those decisions up to the learner (and the data).\nTo place this work in context, there are three predominant approaches for constructing vector rep-\nresentations of sentences from a sequence of words. The first composes words sequentially using\na recurrent neural network, treating the RNN\u2019s final hidden state as the representation of the sen-\ntence (Cho et al} 2014} Sutskever et al] 2014} Kiros et al] 2015p. In such models, there is no explicit\nhierarchical organization imposed on the words, and the RNN\u2019s dynamics must learn to simulate it.\nThe second approach uses tree-structured networks to recursively compose representations of words\nand phrases to form representations of larger phrases and, finally, the complete sentence. In con-\ntrast to sequential models, these models\u2019 architectures are organized according to each sentence\u2019s\nsyntactic structure, that is, the hierarchical organization of words into nested phrases that charac-\nterizes human intuitions about how words combine to form grammatical sentences. Prior work on\ntree-structured models has assumed that trees are either provided together with the input sentences\n2008} r5}\nor that they are predicted based on explicit treebank annotations jointly with the downstream task\n(Bowman et al. 2016} Dyer et al.| 2016). The last approach for constructing sentence representa-\ntions uses convolutional neural networks to produce the representation in a bottom up manner, either\nwith syntactic information (Ma et al.|/2015) or without (Kim}|2014}{Kalchbrenner et al.|{2014).\nOur work can be understood as a compromise between the first two approaches. Rather than usin;\nexplicit supervision of tree structure, we use reinforcement learning to learn tree structures (anc\nthus, sentence-specific compositional architectures), taking performance on a downstream task tha\nuses the computed sentence representation as the reward signal. 
In contrast to sequential RNNs, which ignore tree structure, our model still generates a latent tree for each sentence and uses it to"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.

structure the composition. Our hypothesis is that encouraging the model to learn tree-structured compositions will bias the model toward better generalizations about how words compose to form sentence meanings, leading to better performance on downstream tasks.

This work is related to unsupervised grammar induction (Klein & Manning, 2004; Blunsom & Cohn, 2010; Spitkovsky et al., 2011, inter alia), which seeks to infer a generative grammar of an infinite language from a finite sample of strings from the language, but without any semantic feedback. Previous work on unsupervised grammar induction that incorporates semantic supervision involves designing complex models for Combinatory Categorial Grammars (Zettlemoyer & Collins, 2005) or marginalizing over latent syntactic structures (Naradowsky et al., 2012). Since semantic feedback has been proposed as crucial for the acquisition of syntax (Pinker, 1984), our model offers a simple alternative.¹ However, our primary focus is on improving performance on the downstream model, so the learner may settle on a different solution than conventional English syntax. We thus also explore what kind of syntactic structures are derivable from shallow semantics.

Experiments on various tasks (i.e., sentiment analysis, semantic relatedness, natural language inference, and sentence generation) show that reinforcement learning is a promising direction to discover hierarchical structures of sentences. Notably, representations learned this way outperformed both conventional left-to-right models and tree-structured models based on linguistic syntax in downstream applications. This is in line with prior work showing the value of learning tree structures in statistical machine translation models.
Although the induced tree structures manifested a number of linguistically intuitive structures (e.g., noun phrases, simple verb phrases), there are a number of marked differences to conventional analyses of English sentences (e.g., an overall left-branching structure)."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "Our model consists of two components: a sentence representation model and a reinforcement learning algorithm to learn the tree structure that is used by the sentence representation model."}, {"section_index": "4", "section_name": "2.1 TREELSTM", "section_text": "Our sentence representation model follows the Stack-augmented Parser-Interpreter Neural Network (SPINN; Bowman et al., 2016). SPINN is a shift-reduce parser that uses Long Short-Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) as its composition function. Given an input sentence of N words x = {x_1, x_2, ..., x_N}, we represent each word by its embedding vector x_i ∈ R^D. The parser maintains an index pointer p starting from the leftmost word (p = 1) and a stack. To parse the sentence, it performs a sequence of operations a = {a_1, a_2, ..., a_{2N-1}}, where a_t ∈ {SHIFT, REDUCE}. A SHIFT operation pushes x_p to the stack and moves the pointer to the next word (p++); a REDUCE operation pops two elements from the stack, composes them into a single element, and pushes it back to the stack. SPINN uses Tree LSTM (Zhu et al., 2015) as the REDUCE composition function, which we follow. In Tree LSTM, each element of the stack is represented by two vectors, a hidden state representation h and a memory representation c. Two elements of the stack (h_i, c_i) and (h_j, c_j) are composed as:

i = σ(W_I [h_i, h_j] + b_I)
o = σ(W_O [h_i, h_j] + b_O)
f_L = σ(W_{F_L} [h_i, h_j] + b_{F_L})
f_R = σ(W_{F_R} [h_i, h_j] + b_{F_R})
g = tanh(W_G [h_i, h_j] + b_G)
c = f_L ⊙ c_i + f_R ⊙ c_j + i ⊙ g
h = o ⊙ c    (1)

where [h_i, h_j] denotes the concatenation of h_i and h_j, and σ is the sigmoid activation function.

A unique sequence of {SHIFT, REDUCE} operations corresponds to a unique binary parse tree of the sentence. A SHIFT operation introduces a new leaf node in the parse tree, while a REDUCE operation combines two nodes by merging them into a constituent. See Figure 1 for an example. We note that for a sentence of length N, there are exactly N SHIFT operations and N − 1 REDUCE operations that are needed to produce a binary parse tree of the sentence. The final sentence representation produced

¹Our model only produces an interpretation grammar that parses a language instead of a generative grammar.

[Figure 1 trees: four binary parse trees over four words, with action sequences such as SSSRRSR, SSRSRSR, and SSSSRRR.]

Figure 1: Four examples of trees and their corresponding SHIFT (S) and REDUCE (R) sequences. In each of the examples, there are 4 input words (4 leaf nodes), so 7 operations (4 S, 3 R) are needed to construct a valid tree. The nodes are labeled with the timesteps in which they are introduced to the trees, t ∈ {1, ..., 7}. A SHIFT operation introduces a leaf node, whereas a REDUCE operation introduces a non-leaf node by combining two previously introduced nodes. We can see that different S-R sequences lead to different tree structures.

by the Tree LSTM is the hidden state of the final element of the stack h_{2N−1} (i.e., the topmost node of the tree).

Tracking LSTM. SPINN optionally augments Tree LSTM with another LSTM that incorporates contextual information in sequential order, called the tracking LSTM, which has been shown to improve performance for textual entailment.
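One possible NumPy rendering of the REDUCE composition of Eq. 1 together with the shift-reduce encoder it plugs into; weight initialization, dimensions, and the example action sequence are illustrative, and a full SPINN would also feed the tracking-LSTM output into the gates:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TreeLSTMReduce:
    """REDUCE of Eq. 1: merge two stack elements (h_i, c_i), (h_j, c_j)."""
    def __init__(self, dim, rng):
        # one weight matrix and bias per gate, acting on [h_i, h_j]
        self.W = {g: rng.normal(scale=0.1, size=(dim, 2 * dim)) for g in "iofFg"}
        self.b = {g: np.zeros(dim) for g in "iofFg"}

    def __call__(self, hi, ci, hj, cj):
        x = np.concatenate([hi, hj])
        i   = sigmoid(self.W["i"] @ x + self.b["i"])
        o   = sigmoid(self.W["o"] @ x + self.b["o"])
        f_l = sigmoid(self.W["f"] @ x + self.b["f"])
        f_r = sigmoid(self.W["F"] @ x + self.b["F"])
        g   = np.tanh(self.W["g"] @ x + self.b["g"])
        c = f_l * ci + f_r * cj + i * g
        return o * c, c          # h = o (*) c, as written in Eq. 1

def encode(words, actions, reduce_fn, dim):
    """Run the shift-reduce encoder over a valid S/R string of length 2N-1."""
    stack, buf = [], list(words)
    for a in actions:
        if a == "S":                          # SHIFT: push a leaf (zero memory)
            stack.append((buf.pop(0), np.zeros(dim)))
        else:                                 # REDUCE: pop two, compose, push
            hj, cj = stack.pop(); hi, ci = stack.pop()
            stack.append(reduce_fn(hi, ci, hj, cj))
    return stack[-1][0]                       # hidden state of the topmost node

rng = np.random.default_rng(0)
dim = 8
red = TreeLSTMReduce(dim, rng)
words = [rng.normal(size=dim) for _ in range(4)]
print(encode(words, "SSRSRSR", red, dim))     # a left-branching tree over 4 words
```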
Tracking LSTM. SPINN optionally augments Tree LSTM with another LSTM that incorporates contextual information in sequential order, called the tracking LSTM, which has been shown to improve performance for textual entailment. It is a standard recurrent LSTM network that takes as input the hidden states of the top two elements of the stack and the embedding vector of the word indexed by the pointer at timestep t. Every time a REDUCE operation is performed, the output of the tracking LSTM, e, is included as an additional input in Eq. (1) (i.e., the input to the REDUCE composition function is [h_i, h_j, e] instead of [h_i, h_j])."}, {"section_index": "5", "section_name": "2.2 REINFORCEMENT LEARNING", "section_text": "In previous work (Tai et al., 2015; Bowman et al., 2016), the tree structures that guided composition orders of Tree LSTM models are given directly as input (i.e., a is observed and provided as an input). Formally, each training datum is a triplet {x, a, y}. Tai et al. (2015) consider models where a is also given at test time, whereas Bowman et al. (2016) explore models where a can be either observed or not at test time. When it is only observed during training, a policy is trained to predict a at test time. Note that in this case the policy is trained to match explicit human annotations (i.e., Penn Treebank annotations), so the model learns to optimize representations according to structures that follow human intuitions. They found that models that observe a at both training and test time are better than models that only observe a during training.

Our main idea is to use reinforcement learning (policy gradient methods) to discover the best tree structures for the task that we are interested in. We do not place any restrictions when learning these structures other than that they have to be valid binary parse trees, so the procedure may result in tree structures that match human linguistic intuition, heavily right- or left-branching trees, or other solutions if they improve performance on the downstream task.

We parameterize each action a ∈ {SHIFT, REDUCE} by a policy network π(a | s; W_R), where s is a representation of the current state and W_R is the parameter of the network. Specifically, we use a two-layer feedforward network that takes the hidden states of the top two elements of the stack, h_i and h_j, and the embedding vector of the word indexed by the pointer, x_p, as its input:

π(a | s; W_R) ∝ exp(w_2 · ReLU(W_1 [h_i, h_j, x_p] + b_1))

If a is given as part of the training data, the policy network can be trained, in a supervised training regime, to predict actions that result in trees that match human intuitions. Our training data, on the other hand, is a tuple {x, y}. We use REINFORCE (Williams, 1992), which is an instance of a broader class of algorithms called policy gradient methods, to learn W_R such that the sequence of actions a = {a_1, ..., a_T}, with T = 2N − 1, maximizes

R(W) = E_{π(a,s;W_R)} [ Σ_{t=1}^{T} r_t ],

where r_t is the reward at timestep t. We use performance on a downstream task as the reward function.
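A hedged sketch of how a valid action sequence can be sampled for this REINFORCE objective follows, assuming SHIFT is encoded as 0 and REDUCE as 1; `policy_probs` is a hypothetical stand-in for the two-layer feedforward policy network described above.

```python
import numpy as np

def sample_actions(n_words, policy_probs, rng=np.random.default_rng()):
    """Sample a = {a_1, ..., a_{2N-1}} subject to the two validity constraints."""
    actions, log_probs = [], []
    shifted, stack_size = 0, 0
    for t in range(2 * n_words - 1):
        p = policy_probs(t)                  # [p(SHIFT), p(REDUCE)] from pi(a|s; W_R)
        if stack_size < 2:                   # REDUCE forbidden: fewer than 2 stack items
            p = np.array([1.0, 0.0])
        elif shifted == n_words:             # SHIFT forbidden: no words left to read
            p = np.array([0.0, 1.0])
        a = int(rng.choice(2, p=p))
        actions.append(a)
        log_probs.append(np.log(p[a]))
        shifted += (a == 0)
        stack_size += 1 if a == 0 else -1
    return actions, log_probs  # REINFORCE update: reward times grad of sum(log_probs)
```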
For example, if we are interested in using the learned sentence representations in a classification task, our reward function is the probability of predicting the correct label using a sentence representation composed in the order given by the sequence of actions sampled from the policy network, so R(W) = log p(y | T-LSTM(x); W), where we use W to denote all model parameters (Tree LSTM, policy network, and classifier parameters), y is the correct label for input sentence x, and x is represented by the Tree LSTM structure of §2.1. For a natural language generation task where the goal is to predict the next sentence given the current sentence, we can use the probability of predicting words in the next sentence as the reward function, so R(W) = log p(x_{s+1} | T-LSTM(x_s); W).

Note that in our setup, we do not immediately receive a reward after performing an action at timestep t. The reward is only observed at the end, after we finish creating a representation for the current sentence with Tree LSTM and use the resulting representation for the downstream task. At each timestep t, we sample a valid action according to π(a | s; W_R). We add two simple constraints to make the sequence of actions result in a valid tree: REDUCE is forbidden if there are fewer than two elements on the stack, and SHIFT is forbidden if there are no more words to read from the sentence. After reaching timestep 2N − 1, we construct the final representation and receive a reward that is used to update our model parameters.

We experiment with two learning methods: unsupervised structures and semi-supervised structures. Suppose that we are interested in a classification task. In the unsupervised case, the objective function that we maximize is log p(y | T-LSTM(x); W). In the semi-supervised case, the objective function for the first E epochs also includes a reward term for predicting the correct SHIFT or REDUCE actions obtained from an external parser, in addition to performance on the downstream task, so we maximize log p(y | T-LSTM(x); W) + log π(a | s; W_R). The motivation behind this model is to first guide the model toward tree structures that match human intuitions, before letting it explore other structures close to these ones. After epoch E, we remove the second term from our objective function and continue maximizing the first term. Note that unsupervised and semi-supervised here refer to the tree structures, not the nature of the downstream task."}, {"section_index": "6", "section_name": "3.1 BASELINES", "section_text": "The goal of our experiments is to evaluate our hypothesis that we can discover useful task-specific tree structures (composition orders) with reinforcement learning. We compare the following composition methods (the last two are unique to our work):

Right to left: words are composed from right to left.²

²We choose to include right to left as a baseline since a right-branching tree structure, which is the output of a right to left composition order, has been shown to be a reliable baseline for unsupervised grammar induction (Klein & Manning, 2004).

Left to right: words are composed from left to right.
This is the standard recurrent neural network composition order.

Bidirectional: bidirectional right to left and left to right models, where the final sentence embedding is an average of the sentence embeddings produced by each of these models.

Balanced binary tree: words are composed according to a balanced binary parse tree of the sentence (a sketch of the fixed action sequences for this and the left to right baseline follows Table 2).

Supervised syntax: words are composed according to a predefined parse tree of the sentence. When parse tree information is not included in the dataset, we use the Stanford parser (Klein & Manning, 2003) to parse the corpus.

Semi-supervised syntax: a variant of our reinforcement learning method, where for the first E epochs we include rewards for predicting the predefined parse trees given to the supervised model, before letting the model explore other kinds of tree structures at later epochs (i.e., semi-supervised structures in §2.2).

Latent syntax: another variant of our reinforcement learning method where no predefined structures are given to the model at all (i.e., unsupervised structures in §2.2).

For learning, we use stochastic gradient descent with minibatches of size 1 and an ℓ2 regularization constant tuned on development data from {10⁻⁴, 10⁻⁵, 10⁻⁶, 0}. We use performance on development data to choose the best model and decide when to stop training.

Stanford Sentiment Treebank. We evaluate our model on a sentiment classification task from the Stanford Sentiment Treebank (Socher et al., 2013). We use the binary classification task, where the goal is to predict whether a sentence is a positive or a negative movie review.

We set the word embedding size to 100 and initialize the embeddings with GloVe vectors (Pennington et al., 2014).³ For each sentence, we create a 100-dimensional sentence representation s ∈ R¹⁰⁰ with Tree LSTM, project it to a 200-dimensional vector and apply a ReLU, q = ReLU(W_p s + b_p), and compute p(ŷ = c | q; w_q) ∝ exp(w_{q,c} · q + b_q).

³http://nlp.stanford.edu/projects/glove/

Table 2: Classification accuracy on Stanford Sentiment Treebank dataset. The number of parameters includes word embedding parameters and is our approximation when not reported in previous work.

Model | Acc. | # params.
100D-Right to left | 83.9 | 1.2m
100D-Left to right | 84.7 | 1.2m
100D-Bidirectional | 84.7 | 1.5m
100D-Balanced binary tree | 85.1 | 1.2m
100D-Supervised syntax | 85.3 | 1.2m
100D-Semi-supervised syntax | 86.1 | 1.2m
100D-Latent syntax | 86.5 | 1.2m
RNTN (Socher et al., 2013) | 85.4 | -
DCNN (Kalchbrenner et al., 2014) | 86.8 | -
CNN-random (Kim, 2014) | 82.7 | -
CNN-word2vec (Kim, 2014) | 87.2 | -
CNN-multichannel (Kim, 2014) | 88.1 | -
NSE (Munkhdalai & Yu, 2016a) | 89.7 | 5.4m
NTI-SLSTM (Munkhdalai & Yu, 2016b) | 87.8 | 4.4m
NTI-SLSTM-LSTM (Munkhdalai & Yu, 2016b) | 89.3 | 4.8m
Left to Right LSTM (Tai et al., 2015) | 84.9 | 2.8m
Bidirectional LSTM (Tai et al., 2015) | 87.5 | 2.8m
Constituency Tree-LSTM-random (Tai et al., 2015) | 82.0 | 2.8m
Constituency Tree-LSTM-GloVe (Tai et al., 2015) | 88.0 | 2.8m
Dependency Tree-LSTM (Tai et al., 2015) | 85.7 | 2.8m
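For reference, the fixed-order baselines of §3.1 correspond to deterministic SHIFT (S) / REDUCE (R) sequences; the small sketch below is an illustration (not the paper's code) that generates them for a sentence of n ≥ 2 words.

```python
def left_to_right_actions(n):
    """S S R S R ...: reduce as soon as two elements are on the stack (n >= 2)."""
    actions = ["S", "S"]
    for _ in range(n - 2):
        actions += ["R", "S"]
    return actions + ["R"]

def balanced_actions(n):
    """Balanced binary tree: compose each half recursively, then one REDUCE."""
    if n == 1:
        return ["S"]
    mid = n // 2
    return balanced_actions(mid) + balanced_actions(n - mid) + ["R"]
```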
We evaluate our method on four sentence representation tasks: sentiment classification, semantic relatedness, natural language inference (entailment), and sentence generation. We show statistics of the datasets in Table 1 and describe each task in detail in this subsection.

Table 1: Descriptive statistics of datasets used in our experiments.

Dataset | # of train | # of dev | # of test | Vocab size
SICK | 4,500 | 500 | 4,927 | 2,172
SNLI | 550,152 | 10,000 | 10,000 | 18,461
SST | 98,794 | 872 | 1,821 | 8,201
IMDB | 441,617 | 223,235 | 223,236 | 29,209

We run each model 3 times (corresponding to 3 different initialization points) and use the development data to pick the best model. We show the results in Table 2. Our results agree with prior work that has shown the benefits of using syntactic parse tree information on this dataset (i.e., the supervised recursive model is generally better than sequential models). The best model is the latent syntax model, which is also competitive with results from other work on this dataset. Both the latent and semi-supervised syntax models outperform models with predefined structures, demonstrating the benefit of learning task-specific composition orders.

Semantic relatedness. The second task is to predict the degree of relatedness of two sentences from the Sentences Involving Compositional Knowledge corpus (SICK; Marelli et al., 2014). In this dataset, each pair of sentences is given a relatedness score on a 5-point rating scale. For each sentence, we use Tree LSTM to create its representation. We denote the final representations by {s_1, s_2} ∈ R¹⁰⁰. We construct our prediction by computing: u = (s_2 − s_1)², v = s_1 ⊙ s_2, q = ReLU(W_p [u, v] + b_p), and ŷ = w_q · q + b_q, where W_p ∈ R^{200×200}, b_p ∈ R²⁰⁰, w_q ∈ R²⁰⁰, b_q ∈ R are model parameters, and [u, v] denotes concatenation of the vectors inside the brackets. We train the model to minimize mean squared error.

We run each model 5 times and use the development data to pick the best model. Our results are shown in Table 3. Similarly to the previous task, they clearly demonstrate that learning the tree structures yields better performance.

We also provide results from other work on this dataset for comparison. Some of these models were designed specifically for this task. Our Tree LSTM implementation performs competitively with most models in terms of mean squared error. Our best model, semi-supervised syntax, is better than most models except the LSTM models of Tai et al. (2015), which were trained with a different objective function.⁴ Nonetheless, we observe the same trend as their results, which show the benefit of using syntactic information on this dataset.

⁴Our experiments with the regularized KL-divergence objective function do not result in significant improvements, so we choose to report results with the simpler mean squared error objective function.

Table 3: Mean squared error on SICK dataset.

Model | MSE | # params.
100D-Right to left | 0.461 | 1.0m
100D-Left to right | 0.394 | 1.0m
100D-Bidirectional | 0.373 | 1.3m
100D-Balanced binary tree | 0.455 | 1.0m
100D-Supervised syntax | 0.381 | 1.0m
100D-Semi-supervised syntax | 0.320 | 1.0m
100D-Latent syntax | 0.359 | 1.0m
Illinois-LH (Lai & Hockenmaier, 2014) | 0.369 | -
UNAL-NLP (Jimenez et al., 2014) | 0.356 | -
Meaning Factory (Bjerva et al., 2014) | 0.322 | -
DT-RNN (Socher et al., 2014) | 0.382 | -
Mean Vectors (Tai et al., 2015) | 0.456 | 650k
Left to Right LSTM (Tai et al., 2015) | 0.283 | 1.0m
Bidirectional LSTM (Tai et al., 2015) | 0.274 | 1.0m
Constituency Tree-LSTM (Tai et al., 2015) | 0.273 | 1.0m
Dependency Tree-LSTM (Tai et al., 2015) | 0.253 | 1.0m

Stanford Natural Language Inference. We next evaluate our model for natural language inference (i.e., recognizing textual entailment) using the Stanford Natural Language Inference corpus (SNLI; Bowman et al., 2015). Natural language inference aims to predict whether two sentences are entailment, contradiction, or neutral, which can be formulated as a three-way classification problem. Given a pair of sentences, similar to the previous task, we use Tree LSTM to create sentence representations {s_1, s_2} ∈ R¹⁰⁰ for each of the sentences. Following Bowman et al. (2016), we construct our prediction by computing: u = (s_2 − s_1)², v = s_1 ⊙ s_2, q = ReLU(W_p [u, v, s_1, s_2] + b_p), and p(ŷ = c | q; w_q) ∝ exp(w_{q,c} · q + b_q), where W_p ∈ R^{200×400}, b_p ∈ R²⁰⁰, w_q ∈ R²⁰⁰, b_q ∈ R are model parameters. The objective function that we maximize is the log likelihood of the correct label under the models.
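A minimal sketch of the sentence-pair prediction head just described, assuming plain NumPy; for SNLI the concatenation additionally includes s_1 and s_2, as stated in the text.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sick_score(s1, s2, Wp, bp, wq, bq):
    """Relatedness prediction from two 100-d sentence vectors."""
    u = (s2 - s1) ** 2                           # squared elementwise difference
    v = s1 * s2                                  # elementwise product
    q = relu(Wp @ np.concatenate([u, v]) + bp)   # 200-d hidden layer
    return wq @ q + bq                           # scalar relatedness score
```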
We show the results in Table 4. The latent syntax method performs the best. Interestingly, the sequential left to right model is better than the supervised recursive model in our experiments, which contradicts results from Bowman et al. (2016) showing that a 300D-LSTM is worse than a 300D-SPINN. A possible explanation is that our left to right model has an identical number of parameters to the supervised model, due to the inclusion of the tracking LSTM even in the left to right model (the only difference is in the composition order), whereas the models in Bowman et al. (2016) have different numbers of parameters. Due to the poor performance of the supervised model relative to the unsupervised model, semi-supervised training can only mitigate the loss in accuracy, rather than improve over unsupervised learning. Our models underperform state-of-the-art models on this dataset that have almost four times the number of parameters. We only experiment with smaller models since tree-based models with dynamic structures (e.g., our semi-supervised and latent syntax models) take longer to train. See §4 for details and discussion about training time.

Table 4: Classification accuracy on SNLI dataset.

Model | Acc. | # params.
100D-Right to left | 79.1 | 2.3m
100D-Left to right | 80.2 | 2.3m
100D-Bidirectional | 80.2 | 2.6m
100D-Balanced binary tree | 77.4 | 2.3m
100D-Supervised syntax | 78.5 | 2.3m
100D-Semi-supervised syntax | 80.2 | 2.3m
100D-Latent syntax | 80.5 | 2.3m
100D-LSTM (Bowman et al., 2015) | 77.6 | 5.7m
300D-LSTM (Bowman et al., 2016) | 80.6 | 8.5m
300D-SPINN (Bowman et al., 2016) | 83.2 | 9.2m
1024D-GRU (Vendrov et al., 2016) | 81.4 | 15.0m
— | 82.1 | 9m
— | 83.4 | 9.5m
— | 84.6 | 8.5m

Sentence generation. The last task that we consider is natural language generation. Given a sentence, the goal is to maximize the probability of generating words in the following sentence. This is a similar setup to the Skip-Thought objective (Kiros et al., 2015), except that we do not generate the previous sentence as well. Given a sentence, we encode it with Tree LSTM to obtain s ∈ R¹⁰⁰. We use a bag-of-words model as our decoder, so p(w_i | s; V) ∝ exp(v_i · s), where V ∈ R^{100×29,209} and v_i ∈ R¹⁰⁰ is the i-th column of V. Using a bag-of-words decoder, as opposed to a recurrent neural network decoder, increases the importance of producing a better representation of the current sentence, since the model cannot rely on a sophisticated decoder with a language model component to predict better. This also greatly speeds up our training time.

We use the IMDB movie review corpus for this experiment. The corpus consists of 280,593, 33,793, and 34,029 reviews in the training, development, and test sets respectively. We construct our data using the development and test sets of this corpus. For training, we process the 33,793 reviews from the original development set to get 441,617 pairs of sentences. For testing, we use the 34,029 reviews in the test set (446,471 pairs of sentences). Half of these pairs are used as our development set to tune hyperparameters, and the remaining half is used as our final test set. Our results in Table 5 further demonstrate that methods that learn tree structures perform better than methods that have fixed structures.

Table 5: Word perplexity on the sentence generation task. We also show the perplexity of a model that does not condition on the previous sentence (unconditional) when generating bags of words, for comparison.

Model | Perplexity | # params.
100D-Unconditional | 105.6 | 30k
100D-Right to left | 101.4 | 6m
100D-Left to right | 101.1 | 6m
100D-Bidirectional | 100.2 | 6.2m
100D-Balanced binary tree | 103.3 | 6.2m
100D-Supervised syntax | 100.8 | 6m
100D-Semi-supervised syntax | 98.4 | 6m
100D-Latent syntax | 99.0 | 6m

Figure 2: Examples of tree structures learned by our model which show that the model discovers simple concepts such as noun phrases and verb phrases.

Figure 3: Examples of unconventional tree structures."}, {"section_index": "7", "section_name": "4 DISCUSSION", "section_text": "Learned Structures. Our results in §3 show that our proposed method outperforms competing methods with predefined composition orders on all tasks. The right to left model tends to perform worse than the left to right model. This suggests that the left to right composition order, similar to how humans read in practice, is better for neural network models. Our latent syntax method is able to discover tree structures that work reasonably well on all tasks, regardless of whether the task is better suited for a left to right or supervised syntax composition order.

We inspect what kind of structures the latent syntax model learned and how closely they match human intuitions. We first compute unlabeled bracketing F1 scores for the learned structures and the parses given by the Stanford parser on SNLI and the Stanford Sentiment Treebank. In the SNLI dataset, there are 10,000 pairs of test sentences (20,000 sentences in total), while the Stanford Sentiment Treebank test set contains 1,821 test sentences. The F1 scores for the two datasets are 41.73 and 40.51 respectively. For comparison, the F1 scores of a right (left) branching tree are 19.94 (41.37) for SNLI and 12.96 (38.56) for SST.

We also manually inspect the learned structures. We observe that in SNLI, the trees exhibit an overall left-branching structure, which explains why the F1 scores are closer to a left branching tree structure. Note that in our experiments on this corpus, the supervised syntax model does not perform as well as the left-to-right model, which suggests why the latent syntax model tends to converge towards the left-to-right model.
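A small self-contained sketch of the unlabeled bracketing F1 computation used above: each binary tree is reduced to its set of constituent spans and the two span sets are compared. The nested-tuple tree encoding is an assumption of this illustration.

```python
def spans(tree):
    """Collect (left, right) word-index spans of all constituents of a binary
    tree, where a tree is either a word index or a pair of subtrees over a
    contiguous span, e.g. ((0, 1), (2, 3))."""
    out = []
    def walk(t):
        if isinstance(t, int):
            return t, t
        (l1, _), (_, r2) = walk(t[0]), walk(t[1])
        out.append((l1, r2))
        return l1, r2
    walk(tree)
    return out

def bracketing_f1(pred_tree, gold_tree):
    pred, gold = set(spans(pred_tree)), set(spans(gold_tree))
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r > 0 else 0.0
```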
We handpicked two examples of trees learned by our model and show them in Figure 2. We can see that in some cases the model is able to discover concepts such as noun phrases (e.g., a boy, his sleds) and simple verb phrases (e.g., wearing sunglasses, is frowning). Of course, the model sometimes settles on structures that make little sense to humans. We show two such examples in Figure 3, where the model chooses to compose playing frisbee in and outside a as phrases.

Training Time. A major limitation of our proposed model is that it takes much longer to train compared to models with predefined structures. We observe that our models only outperform models with fixed structures after several training epochs; and on some datasets such as SNLI or IMDB, an epoch can take 5-7 hours (we use batch size 1 since the computation graph needs to be reconstructed for every example at every iteration, depending on the samples from the policy network). This is also the main reason that we could only use smaller 100-dimensional Tree LSTM models in all our experiments. While for smaller datasets such as SICK the overall training time is approximately 6 hours, for SNLI or IMDB it takes 3-4 days for the model to reach convergence. In general, the latent syntax and semi-supervised syntax models take about two or three times longer to converge compared to models with predefined structures."}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "We presented a reinforcement learning method to learn hierarchical structures of natural language sentences. We demonstrated the benefit of learning task-specific composition orders on four tasks: sentiment analysis, semantic relatedness, natural language inference, and sentence generation. We qualitatively and quantitatively analyzed the induced trees and showed that they both incorporate some linguistically intuitive structures (e.g., noun phrases, simple verb phrases) and are different than conventional English syntactic structures."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proc. of EMNLP, 2015.

David Chiang. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228, 2007.

Noam Chomsky. Syntactic Structures. Mouton, 1957.

Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. A compositional distributional model of meaning. In Proc. of the Second Symposium on Quantum Interaction, 2008.

Edward Grefenstette and Mehrnoosh Sadrzadeh. Experimental support for a categorical compositional distributional model of meaning. In Proc. of EMNLP, 2011.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Sergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios Batiz, and Av Mendizabal. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. In Proc. of SemEval, 2014.

Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. In Proc. of ACL, 2014.

Yoon Kim. Convolutional neural networks for sentence classification. In Proc. of EMNLP, 2014.

Dan Klein and Christopher D. Manning. Accurate unlexicalized parsing. In Proc. of ACL, 2003.

Dan Klein and Christopher D. Manning. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL, 2004.

Alice Lai and Julia Hockenmaier. Illinois-LH: A denotational and distributional approach to semantics. In Proc. of SemEval, 2014.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proc. of SemEval, 2014.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint, 2016a.

Jason Naradowsky, Sebastian Riedel, and David A. Smith. Improving NLP through marginalization of hidden syntactic structure. In Proc. of EMNLP, 2012.

Steven Pinker. Language Learnability and Language Development. Harvard, 1984.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proc. of NIPS, 2014.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proc. of ACL, 2015.

Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In Proc. of ICLR, 2016.

Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of UAI, 2005.

Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. Long short-term memory over recursive structures. In Proc. of ICML, 2015."}]
BymIbLKgl
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The discussion on invariance is a strong component of the solutions to many classical problems in\nnumerical differential geometry. A typical example is that of planar shape analysis where one desires\nto have a local function of the contour which is invariant to rotations, translations and reflections\nlike the Euclidean curvature. This representation can be used to obtain correspondence between\nthe shapes and also to compare and classify them. However, the numerical construction of such\nfunctions from discrete sampled data is non-trivial and requires robust numerical techniques for\ntheir stable and efficient computation.\nConvolutional neural networks have been very successful in recent years in solving problems in\nimage processing, recognition and classification. Efficient architectures have been studied and de-\nveloped to extract semantic features from images invariant to a certain class or category of transfor-\nmations. Coupled with efficient optimization routines and more importantly, a large amount of data,\na convolutional neural network can be trained to construct invariant representations and semanti-\ncally significant features of images as well as other types of data such as speech and language. It\nis widely acknowledged that such networks have superior representational power compared to more\nprincipled methods with more handcrafted features such as wavelets, Fourier methods, kernels ete.\nwhich are not optimal for more semantic data processing tasks.\nIn Section 2}we begin by giving a brief summary of the theory and history of invariant curve repre-\nsentations. In Section/3]we explain our main contribution of casting the problem into the form which"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose a metric learning framework for the construction of invariant geo-\nmetric functions of planar curves for the Euclidean and Similarity group of trans-\nformations. We leverage on the representational power of convolutional neural\nnetworks to compute these geometric quantities. In comparison with axiomatic\nconstructions, we show that the invariants approximated by the learning archi-\ntectures have better numerical qualities such as robustness to noise, resiliency to\nsampling, as well as the ability to adapt to occlusion and partiality. Finally, we de-\nvelop a novel multi-scale representation in a similarity metric learning paradigm.\nIn this paper we connect two seemingly different fields: convolutional neural network based metric\nlearning methods and numerical differential geometry. The results we present are the outcome of\ninvestigating the question: \u201dCan metric learning methods be used to construct invariant geometric\nquantities?\u201d By training with a Siamese configuration involving only positive and negative examples\nof Euclidean transformations, we show that the network is able to train for an invariant geometric\nfunction of the curve which can be contrasted with a theoretical quantity: Euclidean curvature. An\nexample of each can be seen Figure[I] We compare the learned invariant functions with axiomatic\ncounterparts and provide a discussion on their relationship. Analogous to principled constructions\nlike curvature-scale space methods and integral invariants, we develop a multi-scale representation\nusing a data-dependent learning based approach. 
We show that network models are able to construct geometric invariants that are numerically more stable and robust than these more principled approaches. We contrast the computational work-flow of a typical numerical geometry pipeline with that of the convolutional neural network model and develop a relationship among them, highlighting important geometric ideas.

Figure 1: Comparing the axiomatic and learned invariants of a curve.

An invariant representation of a curve is the set of signature functions assigned to every point of the curve which does not change despite the action of a certain type of transformation. A powerful theorem from E. Cartan and Sophus Lie (Ackerman, 1976) characterizes the space of these invariant signatures. It begins with the concept of arc-length, which is a generalized notion of the length along a curve. Given a type of transformation, one can construct an intrinsic arc-length that is independent of the parameterization of the curve, and compute the curvature with respect to this arc-length. The fundamental invariants of the curve, known as differential invariants (Bruckstein & Netravali, 1995), are the set of functions comprising the curvature and its successive derivatives with respect to the invariant arc-length. These differential invariants are unique in the sense that two curves are related by the group transformation if and only if their differential invariant signatures are identical. For the Euclidean group, the arc-length and curvature of a curve C(p) = (x(p), y(p)) are given by

s(p) = ∫ |C_p| dp = ∫ √(x_p² + y_p²) dp,        (1)

κ(p) = det(C_p, C_pp) / |C_p|³ = (x_p y_pp − y_p x_pp) / (x_p² + y_p²)^{3/2}.        (2)

Thus, we have the Euclidean differential invariant signatures given by the set {κ, κ_s, κ_ss, ...} for every point on the curve. Cartan's theorem provides an axiomatic construction of invariant signatures, and the uniqueness property of the theorem guarantees their theoretical validity. Their importance is highlighted by the fact that any invariant of the curve is a function of the fundamental differential invariants.

The difficulty with differential invariants is their stable numerical computation. Equations (1) and (2) involve non-linear functions of derivatives of the curve, and this poses serious numerical issues for their practical implementation where noise and poor sampling are involved. Apart from methods like Pajdla & Van Gool (1995) and Weiss (1993), numerical considerations motivated the development of multi-scale representations. These methods used alternative constructions of invariant signatures which were robust to noise. More importantly, they allowed a hierarchical representation, in which the strongest and most global components of variation in the contour of the curve are encoded in signatures of higher scale and, as we go lower, the more localized and rapid changes get injected into the representation.
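To make the numerical issue concrete, here is a straightforward finite-difference evaluation of Eqs. (1) and (2) on a sampled curve; this is a sketch using NumPy, not the implementation referenced above, and the small epsilon guard against division by zero is an added assumption.

```python
import numpy as np

def arclength_and_curvature(C):
    """C: (N, 2) array of sampled (x, y) points along the curve."""
    xp, yp = np.gradient(C[:, 0]), np.gradient(C[:, 1])    # C_p
    xpp, ypp = np.gradient(xp), np.gradient(yp)            # C_pp
    speed = np.sqrt(xp ** 2 + yp ** 2)                     # |C_p|
    # Cumulative arc length by the trapezoidal rule, Eq. (1).
    s = np.concatenate([[0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]))])
    # Curvature from first and second derivatives, Eq. (2).
    kappa = (xp * ypp - yp * xpp) / np.clip(speed ** 3, 1e-12, None)
    return s, kappa
```

Differentiating noisy samples twice amplifies the noise, which is exactly the instability the multi-scale constructions below are designed to counteract.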
Two principal methods in this category are scale-space methods and integral invariants. In scale-space methods (Mokhtarian & Mackworth, 1992; Sapiro & Tannenbaum, 1995), the curve is subjected to an invariant evolution process under which it can be evolved to different levels of abstraction (see Figure 5). The curvature function at each evolved time t is then recorded as an invariant. For example, {κ(s, t), κ_s(s, t), κ_ss(s, t), ...} would be the Euclidean-invariant representations at scale t.

Figure 2: The Siamese configuration: two identical arms with shared weights process Curve 1 (C1) and Curve 2 (C2) to produce outputs S_Θ(C1) and S_Θ(C2), together with a label λ ∈ {0, 1}.

Integral invariants (Manay et al., 2004; Fidler et al., 2008; Pottmann et al., 2009; Hong & Soatto, 2015) are invariant signatures which compute integral measures along the curve. For example, for each point on the contour, the integral area invariant computes the area of the region obtained from the intersection of a ball of radius r placed at that point and the interior of the contour. The integral nature of the computation gives the signature robustness to noise, and by adjusting the radius r of the ball one can associate a scale-space of responses to this invariant. Manay et al. (2004) and Pottmann et al. (2009) provide a detailed treatise on different types of integral invariants and their properties.

It is easy to observe that differential and integral invariants can be thought of as being obtained from non-linear operations of convolution filters. The construction of differential invariants employs filters whose action is equivalent to numerical differentiation (high-pass filtering), whereas integral invariants use filters which act like numerical integrators (low-pass filtering) for stabilizing the invariant. This provides a motivation to adopt a learning based approach, and we demonstrate that the process of estimating these filters and functions can be outsourced to a learning framework.
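As a point of comparison for the learned filters, a sketch of the integral area invariant described above, using the shapely geometry library as one possible implementation (the paper does not specify one); normalizing by the disk area is a choice of this sketch.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def integral_area_invariant(C, r):
    """C: (N, 2) samples of a closed, simple contour; r: ball radius."""
    interior = Polygon(C)
    # Area of (disk of radius r at p) intersected with the contour interior.
    areas = [interior.intersection(Point(p[0], p[1]).buffer(r)).area for p in C]
    return np.array(areas) / (np.pi * r ** 2)   # fraction of the disk inside
```

Sweeping the radius r yields the scale-space of responses mentioned above.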
We use the Siamese configuration for implementing this idea. Such configurations have been used for signature verification (Bromley et al., 1993), face verification and metric learning (Chopra et al., 2005; Hadsell et al., 2006), and for learning 3D shape descriptors (Masci et al., 2015; Xie et al., 2015). In these papers, the goal was to learn the descriptor, and hence the similarity metric, from data using notions of only positive and negative examples. We use the same framework for the estimation of geometric invariants. However, in contrast to these methods, we contribute an analysis of the output descriptor and provide a geometric context to the learning process. The contrastive loss function driving the training ensures that the network chooses filters which push and pull different features of the curve into the invariant, balancing a mix of robustness and fidelity.

A planar curve can be represented either explicitly, by sampling points on the curve, or using an implicit representation such as level sets. We work with an explicit representation of simple curves (open or closed) with random variable sampling of the points along the curve. Thus, every curve is an N × 2 array denoting the X and Y coordinates of its N points. We build a convolutional neural network which inputs a curve and outputs a representation or signature for every point on the curve. We can interpret this architecture as an algorithmic scheme for representing a function over the curve. However, feeding in a single curve is insufficient; instead, we run this convolutional architecture in a Siamese configuration (Figure 2) that accepts a curve and either a transformed version (positive) of the curve or an unrelated curve (negative). By using two identical copies of the same network, sharing weights, to process these two curves, we are able to extract geometric invariance by using a loss function that requires the two arms of the Siamese configuration to produce values that are minimally different for curves related by Euclidean transformations (positive examples) and maximally different for carefully constructed negative examples. To fully enable training of our network, we build a large dataset comprising positive and negative examples of the relevant transformations from a database of curves. We choose to minimize the contrastive loss between the two outputs of the Siamese network, as this directs the network architecture to model a function over the curve which is invariant to the transformation."}, {"section_index": "2", "section_name": "3.1 LOSS FUNCTION", "section_text": "We employ the contrastive loss function (Chopra et al., 2005; Hadsell et al., 2006) for training our network. The Siamese configuration comprises two identical networks of Figure 3 computing signatures for two separate inputs of data. Associated to each input pair is a label which indicates whether that pair is a positive (λ = 1) or a negative (λ = 0) example (Figure 2). Let C_{1i} and C_{2i} be the curves input to the first and second arm of the configuration for the i-th example of the data, with label λ_i, and let S_Θ(C) denote the output of the network for a given set of weights Θ and input curve C. The contrastive loss function is given by

L(Θ) = (1/N) Σ_{i=1}^{N} [ λ_i ||S_Θ(C_{1i}) − S_Θ(C_{2i})|| + (1 − λ_i) max(0, µ − ||S_Θ(C_{1i}) − S_Θ(C_{2i})||) ],        (3)

where µ is a cross-validated hyper-parameter, known as the margin, which defines the metric threshold beyond which negative examples are penalized."}, {"section_index": "3", "section_name": "3.2 ARCHITECTURE", "section_text": "The network inputs an N × 2 array representing the coordinates of N points along the curve. Given the sequential nature of curves, the mostly 1D convolution operations could also be viewed from the point of view of temporal signals using recurrent neural network architectures; here, however, we choose instead to use a multistage CNN pipeline. The network, given by one arm of the Siamese configuration, comprises three stages that use layer units which are typically considered the basic building blocks of modern CNN architectures. Each stage contains two sequential batches of convolutions appended with rectified linear units (ReLU) and ending with a max unit. The convolutional unit comprises convolutions with 15 filters of width 5, as depicted in Figure 3. The max unit computes the maximum of the 15 responses per point to yield an intermediate output after each stage. The final stage is followed by a linear layer which linearly combines the responses to yield the final output. Since every iteration of convolution results in a reduction of the sequence length, sufficient padding is provided on both ends of the curve. This ensures that the value of the signature at a point is the result of the response of the computation resulting from the filter centered around that point.

Figure 3: One arm of the Siamese configuration: three stages of [Conv (15 filters, width 5) → ReLU → Conv (15 filters, width 5) → ReLU → Max], with the final stage followed by a linear layer, mapping the input curve to the output signature.
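A minimal PyTorch sketch of one arm of Figure 3 together with the loss of Eq. (3); the padding values and the choice to keep all 15 channels going into the final linear layer follow one plausible reading of the description and are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SignatureArm(nn.Module):
    def __init__(self):
        super().__init__()
        def stage(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, 15, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(15, 15, kernel_size=5, padding=2), nn.ReLU())
        self.stage1, self.stage2, self.stage3 = stage(2), stage(1), stage(1)
        self.linear = nn.Conv1d(15, 1, kernel_size=1)  # pointwise linear combination

    def forward(self, curve):                       # curve: (batch, 2, N)
        x = self.stage1(curve).max(dim=1, keepdim=True).values  # max over 15 filters
        x = self.stage2(x).max(dim=1, keepdim=True).values
        x = self.stage3(x)                          # final stage feeds the linear layer
        return self.linear(x).squeeze(1)            # per-point signature: (batch, N)

def contrastive_loss(s1, s2, label, margin=1.0):
    """Eq. (3): pull positive pairs together, push negatives beyond the margin."""
    d = torch.norm(s1 - s2, dim=-1)
    return (label * d + (1 - label) * torch.clamp(margin - d, min=0.0)).mean()
```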
"}, {"section_index": "4", "section_name": "3.3 BUILDING REPRESENTATIVE DATASETS AND IMPLEMENTATION", "section_text": "In order to train for invariance, we need to build a dataset with two major attributes: first, it needs to contain a large number of examples of the transformation and, second, the curves involved in the training need to have sufficient richness in terms of different patterns of sharp edges, corners, smoothness, noise and sampling factors to ensure sufficient generalizability of the model. To sufficiently span the space of Euclidean transformations, we generate random two-dimensional rotations by uniformly sampling angles from [−π, π]. The curves are normalized by removing the mean and dividing by the standard deviation, thereby achieving invariance to translations and uniform scaling. The contours are extracted from the shapes of the MPEG7 database, as shown in the first part of Figure 4. It comprises a total of 1400 shapes containing 70 different categories of objects; 700 of the total were used for training and 350 each for testing and validation. The positive examples are constructed by taking a curve, randomly transforming it by a rotation, translation and reflection, and pairing the two together. The negative examples are obtained by pairing curves which are deemed dissimilar, as explained in Section 4. Each extracted contour is sub-sampled to 500 points. We build a training dataset of 10,000 examples with approximately 50% each of positive and negative examples. The network and training are implemented using the Torch library (Collobert et al., 2002). We train using Adagrad (Duchi et al., 2011) at a learning rate of 5 × 10⁻⁴ and a batch size of 10, and we set the contrastive loss hyperparameter (the margin) to µ = 1. Figure 4 shows the error plot for training and the convergence of the loss to a minimum. The rest of this work describes how we can observe and extend the efficacy of the trained network on new data.

Figure 4: Contours extracted from the MPEG7 database and the error plot for training.
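A sketch of the positive-pair construction just described (normalization plus a random rotation/reflection); the 50% reflection probability is an assumption of this illustration.

```python
import numpy as np

def normalize(C):
    """Remove the mean and divide by the standard deviation."""
    C = C - C.mean(axis=0)
    return C / C.std()

def random_euclidean(C, rng=np.random.default_rng()):
    theta = rng.uniform(-np.pi, np.pi)      # rotation angle sampled from [-pi, pi]
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    if rng.random() < 0.5:                  # random reflection
        R = R @ np.diag([1.0, -1.0])
    return C @ R.T

def positive_pair(C):
    return normalize(C), normalize(random_euclidean(C))   # label lambda = 1
```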
"}, {"section_index": "5", "section_name": "4 MULTI-SCALE REPRESENTATIONS", "section_text": "Invariant representations at varying levels of abstraction have a theoretical interest as well as practical importance. Enumeration at different scales enables a hierarchical method of analysis which is useful when there is noise, and hence stability is desired in the invariant. As mentioned in Section 2, the invariants constructed from scale-space methods and integral invariants naturally allow for such a decomposition by construction.

A valuable insight for multi-scale representations is provided by the theorems of Gage, Hamilton and Grayson (Gage & Hamilton, 1986; Grayson, 1987). They state that if we evolve any smooth non-intersecting planar curve with mean curvature flow, which is invariant to Euclidean transformations, it will ultimately converge into a circle before vanishing into a point. The curvature corresponding to this evolution follows a profile as shown in Figure 5, going from a possibly noisy descriptive feature to a constant function.

Figure 5: Curve evolution and the corresponding curvature profile.

In our framework, we observe an analogous behavior in a data-dependent setting. The positive part of the loss function (λ = 1) forces the network to push the outputs of the positive examples closer, whereas the negative part (λ = 0) forces the weights of the network to push the outputs of the negative examples apart, beyond the distance barrier of µ. If the training data does not contain any negative examples, it is easy to see that the weights of the network will converge to a point which yields a constant output that trivially minimizes the loss function in Equation (3).

Designing the negative examples of the training data provides the means to obtain a multi-scale representation. Since we are training for a local descriptor of a curve, that is, a function whose value at a point depends only on its local neighborhood, a negative example must pair curves such that corresponding points on each curve have different local neighborhoods. One such possibility is to construct negative examples which pair curves with their smoothed or evolved versions, as in Table 1. Minimizing the loss function in Equation (3) then leads to an action which pushes apart the signatures of the curve and its evolved or smoothed counterpart, thereby injecting the signature with fidelity and descriptiveness. We construct separate datasets where the negative examples are drawn as shown in the rows of Table 1 and train a network model for each of them using the loss function. In our experiments we perform smoothing by using a local polynomial regression with weighted linear least squares to obtain the evolved contour. Figure 6 shows the outputs of these different networks, which demonstrate a scale-space-like behavior.

Table 1 (columns: Positive Example, Negative Example, Scale Index from Low to High): Examples of training pairs for different scales. Each row indicates the pattern of training examples for a different scale.

Figure 6: Experiments with multi-scale representations. Each signature is the output of a network trained on a dataset with training examples formed as per the rows of Table 1. Index 1 indicates a low and index 5 a higher level of abstraction.
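A sketch of how the scale-indexed negative pairs of Table 1 can be built by pairing a contour with a progressively smoothed copy of itself; a Savitzky-Golay filter is used here as one local polynomial least-squares smoother, since the exact regression weights are not specified in the text.

```python
import numpy as np
from scipy.signal import savgol_filter

def smoothed(C, window):
    """Smooth x and y independently with a cubic local polynomial fit."""
    return np.stack([savgol_filter(C[:, 0], window, 3),
                     savgol_filter(C[:, 1], window, 3)], axis=1)

def negative_pair(C, scale_index, base_window=11):
    w = base_window + 10 * scale_index    # larger (odd) window = coarser scale
    return C, smoothed(C, w)              # label lambda = 0
```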
Ability to handle low signal-to-noise ratios and efficiency of computation are typical qualities desired in a geometric invariant. To test the numerical stability and robustness of the invariant signatures, we designed two experiments. In the first experiment, we add increasing levels of zero-mean Gaussian noise to the curve and compare the three types of signatures: differential (Euclidean curvature), integral (integral area invariant) and the output of our network (henceforth termed the network invariant), as shown in Figure 7. Apart from adding noise, we also rotate the curve to obtain a better assessment of the Euclidean invariance property.

Figure 7: Stability of the different signatures under varying levels of noise and Euclidean transformations. The correspondence between shape and signature is indicated by color. All signatures are normalized.

In Figure 8, we test the descriptiveness of the signature under noisy conditions in a shape retrieval task for a set of 30 shapes from 6 different categories. For every curve, we generate 5 signatures at different scales for the integral and the network invariant and use them as a representation for that shape. We use the Hausdorff distance (Bronstein et al., 2008) as a distance measure between the two sets of signatures to rank the shapes for retrieval. Figures 7 and 8 demonstrate the robustness of the network, especially at high noise levels.

In the second experiment, we decimate a high resolution contour at successive resolutions by randomly sub-sampling and redistributing a set of its points (marked blue in Figure 9) and observe the signatures at certain fixed points (marked red in Figure 9) on the curve. Figure 9 shows that the network is able to handle these changes in sampling and compares well with the integral invariant. Figures 7 and 9 thus represent the behavior of the geometric signatures for two different tests: large noise for a moderate strength of signal, and low signal for a moderate level of noise.

Figure 8: 5 shape contours of 6 different categories and the shape retrieval results for this set at different noise levels.

Figure 9: Testing robustness of signatures to different sampling conditions. The signatures are evaluated at the fixed red points on each contour, and the density and distribution of the blue points along the curve is varied from 70% to 5% of the total number of points of a high resolution curve.

We have demonstrated a method to learn geometric invariants of planar curves. Using just positive and negative examples of Euclidean transformations, we showed that a convolutional neural network is able to effectively discover and encode transform-invariant properties of curves while remaining numerically robust in the face of noise. By giving a geometric context to the training process, we were able to develop novel multi-scale representations through a learning based approach without explicitly enforcing such behavior. Compared to a more axiomatic framework of modeling with differential geometry and engineering with numerical analysis, we demonstrated a way of replacing this pipeline with a deep learning framework which combines both these aspects. The non-specific nature of this framework can be seen as providing the groundwork for future data-based deep learning problems in differential geometry."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 664800)."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "M Ackerman. Sophus Lie's 1884 Differential Invariant Paper. Math Sci Press, 1976.

Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):669-688, 1993.

Alexander M Bronstein, Michael M Bronstein, and Ron Kimmel. Numerical geometry of non-rigid shapes. Springer Science & Business Media, 2008.

Ronan Collobert, Samy Bengio, and Johnny Mariéthoz. Torch: a modular machine learning software library. Technical report, Idiap, 2002.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Michael Gage and Richard S Hamilton. The heat equation shrinking convex plane curves. Journal of Differential Geometry, 23(1):69-96, 1986.

Matthew A Grayson. The heat equation shrinks embedded plane curves to round points. Journal of Differential Geometry, 26(2):285-314, 1987.

Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pp. 1735-1742. IEEE, 2006.

Yann LeCun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006.

Siddharth Manay, Byung-Woo Hong, Anthony J Yezzi, and Stefano Soatto. Integral invariant signatures. In European Conference on Computer Vision, pp. 87-99. Springer, 2004.

Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 37-45, 2015.

Farzin Mokhtarian and Alan K Mackworth. A theory of multiscale, curvature-based shape representation for planar curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(8):789-805, 1992.

Helmut Pottmann, Johannes Wallner, Qi-Xing Huang, and Yong-Liang Yang. Integral invariants for robust geometry processing. Computer Aided Geometric Design, 26(1):37-60, 2009.

Guillermo Sapiro and Allen Tannenbaum. Area and length preserving geometric invariant scale-spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1):67-72, 1995.

Jin Xie, Yi Fang, Fan Zhu, and Edward Wong. Deepshape: Deep learned shape descriptor for 3d shape matching and retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1275-1283, 2015.

Figure 10: (a) Standard 1D Gaussian filters and their derivatives used for curvature and curvature scale-space calculations. (b) Some of the filters from the first layer of the network proposed in this paper. One can interpret the shapes of the filters in (b) as derivative kernels which are learned from data and therefore adapted to its sampling conditions."}]
rJ8Je4clg
[{"section_index": "0", "section_name": "LEARNING TO PLAY IN A DAY: FASTER DEEP REIN-\nFORCEMENT LEARNING BY OPTIMALITY TIGHTENING", "section_text": "Frank S. He\nDepartment of Computer Science\nUniversity of Illinois at Urbana-Champaigr\nZhejiang University\nDepartment of Electrical and Computer Engineering\nUniversity of Illinois at t Urbana- Champaign\nWe propose a novel training algorithm for reinforcement learning which com-\nbines the strength of deep Q-learning with a constrained optimization approach\nto tighten optimality and encourage faster reward propagation. Our novel tech-\nnique makes deep reinforcement learning more practical by drastically reducing\nthe training time. We evaluate the performance of our approach on the 49 games\nof the challenging Arcade Learning Environment, and report significant improve-\nments in both training time and accuracy."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recent advances of supervised\n\ndeep learning techniques (LeCun et al., 2015) in computer vision.\n\nspeech recognition and natural language processing have tremendously improved the performance\non challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based transla-\n\ntion (Sutskever et al., 2014) and\nlearning is to use artificial neural\nabstractions and representations\n\nfar from building intelligent solu\u2019\n\nnetworks to mo\n\nlanguage modeling (Hinton et al., 2012). The core idea of deep\n\nlel complex hierarchical or compositional data\n\nrom raw input data (Bengio et al., 2013). However, we are still\ntions for many real-world challenges, such as autonomous driv-\n\ning, human-computer interaction and automated decision making, in which software agents need to\n\nconsider interactions with a dyna\nlearning (Bertsekas & Tsitsiklis,\nstudies these problems and algori'\n\n996; Powell, 201\n\nmic environment and take actions towards goals. Reinforcement\n\n; Sutton & Barto, 1998; Kaelbling et al., 1996)\n\nthms which learn policies to make decisions so as to maximize a\n\nreward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989.\n\nWatkins & Dayan, 1992). Deep r\n\neinforcement learning with neural function approximation (Tsit-\n\nsiklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combine\n\ndeep learning and reinforcement\n\nlearning, has been\n\nproved to be effective on a few problems which\n\nclassical AI approaches were unable to solve. Notable examples of deep reinforcement learning\ninclude human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).\nDespite these successes, its high demand of computational resources makes deep reinforcemen\nlearning not yet applicable to many real-world problems. For example, even for an Atari game, the\ndeep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up tc\nhundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015)\nAlphaGo trained its model using a database of game records of advanced players and, in addition\nabout 30 million self-played game moves (Silver et al., 2016). The sheer amount of required com.\nputational resources of current deep reinforcement learning algorithms is a major bottleneck for its\napplicability to real-world tasks. 
Moreover, in many tasks, the reward signal is sparse and delayed, thus making the convergence of learning even slower."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast reward propagation. While current deep Q-learning algorithms rely on a set of experience replays, they only consider a single forward step for the Bellman optimality error minimization, which becomes highly inefficient when the reward signal is sparse and delayed. To better exploit long-term high-reward strategies from past experience, we design a new algorithm to capture rewards from both forward and backward steps of the replays via a constrained optimization approach. This encourages faster reward propagation, which reduces the training time of deep Q-learning.

We evaluate our proposed approach using the Arcade Learning Environment (Bellemare et al., 2013) and show that our new strategy outperforms competing techniques in both accuracy and training time on 30 out of 49 games despite being trained with significantly fewer data frames."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "There have been a number of approaches improving the stability, convergence and runtime of deep reinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was first proposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcement learning and experience replays (Lin, 1992; Wawrzynski, 2009).

Nonetheless, the original DQN algorithm required millions of training steps to achieve human-level performance on Atari games. To improve the stability, recently, double Q-learning was combined with deep neural networks, with the goal to alleviate the overestimation issue observed in Q-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea is to use two Q-networks for the action selection and Q-function value calculation, respectively. The greedy action of the target is first chosen using the current Q-network parameters, then the target value is computed using a set of parameters from a previous iteration. Another notable advance is "prioritized experience replay" (Schaul et al., 2016) or "prioritized sweeping" for deep Q-learning. The idea is to increase the replay probability of experience tuples that have a high expected learning progress, measured by temporal difference errors.

In addition to the aforementioned variants of Q-learning, other network architectures have been proposed. The dueling network architecture applies an extra network structure to learn the importance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deep actor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016). It deploys multiple threads learning directly from current transitions. The approach is applicable to both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as in continuous domains. The model-free episodic control approach evaluates state-action pairs based on episodic memory, using k-nearest neighbors with hashing functions (Blundell et al., 2016).
Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thus leading to much faster learning (Osband et al., 2016).
Our fast reward propagation differs from all of the aforementioned approaches. The key idea of our method is to propagate delayed and sparse rewards during Q-network training, and thus greatly improve the efficiency and performance. We formulate this propagation step via a constrained program. Note that our program is also different from earlier work on off-policy Q*(lambda) algorithms with eligibility traces and n-step Q-learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016), which have been recently shown to perform poorly when used for training deep Q-networks on Atari games.
Reinforcement learning considers agents which are able to take a sequence of actions in an environment. By taking actions and experiencing at most one scalar reward per action, their task is to learn a policy which allows them to act such that a high cumulative reward is obtained over time.
More precisely, consider an agent operating over time $t \in \{1,\ldots,T\}$. At time $t$ the agent is in an environment state $s_t$ and reacts upon it by choosing action $a_t \in A$. The agent will then observe a new state $s_{t+1}$ and receive a numerical reward $r_t \in \mathbb{R}$. Throughout, we assume the set of possible actions, i.e., the set $A$, to be discrete.
A well established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain an action-value function, often also referred to as Q-function, $Q(s,a)$. Given a state $s$, the action-value function provides a 'value' for each action $a \in A$ which estimates the expected future reward if action $a \in A$ is taken. The estimated future reward is computed based on the current state $s$ or a series of past states $s_t$ if available.
The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function $Q^*$ via a state-action-value function

$$Q^*(s_t, a) = \mathbb{E}\left[r_t + \gamma \max_{a'} Q^*(s_{t+1}, a')\right]. \quad (1)$$

Hereby the expectation is taken w.r.t. the distribution of state $s_{t+1}$ and reward $r_t$ obtained after taking action $a_t$, and $\gamma$ is a discount factor. Intuitively, reward for taking action $a$ plus best future reward should equal the best total return from the current state.
The choice of Q-function is crucial for the success of Q-learning algorithms. While classical methods use linear Q-functions based on a set of hand-crafted features of the state, more recent approaches use nonlinear deep neural networks to automatically mine intermediate features from the state (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change has been shown to be very effective for many applications of reinforcement learning. However, automatic mining of intermediate representations comes at a price: larger quantities of data and more computational resources are required. Even though it is sometimes straightforward to extract large amounts of data, e.g., when training on video games, for successful optimization, it is crucial that the algorithms operate on un-correlated samples from a dataset $D$ for stability. A technique called "experience replay" (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as a standard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015). Experience replays are stored as a dataset $D = \{(s_j, a_j, r_j, s_{j+1})\}$ which contains state-action-reward-future-state tuples $(s_j, a_j, r_j, s_{j+1})$, including past observations from previous plays.
The characterization of optimality given in Eq. (1),
combined with an "experience replay" dataset $D$, results in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episode in the initial state $s_0$; sample a mini-batch of tuples $B = \{(s_j, a_j, r_j, s_{j+1})\} \subseteq D$; compute and fix the targets $y_j = r_j + \gamma \max_a Q_{\theta^-}(s_{j+1}, a)$ for each tuple using a recent estimate $Q_{\theta^-}$ (the maximization is only considered if $s_j$ is not a terminal state); update the Q-function by optimizing the following program w.r.t. the parameters $\theta$, typically via stochastic gradient descent:

$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \left(Q_\theta(s_j, a_j) - y_j\right)^2. \quad (2)$$

After having updated the parameters of the Q-function we perform an action simulation, either choosing an action at random with a small probability $\epsilon$, or by following the strategy $\arg\max_a Q_\theta(s_t, a)$ which is currently estimated. This strategy is also called the $\epsilon$-greedy policy. We then obtain the actual reward $r_t$. Subsequently we augment the replay memory with the new tuple $(s_t, a_t, r_t, s_{t+1})$, and continue the simulation until this episode terminates or reaches an upper limit of steps, and we restart a new episode. When optimizing w.r.t. the parameter $\theta$, a recent Q-network is used to compute the target $y_j = r_j + \gamma \max_a Q_{\theta^-}(s_{j+1}, a)$. This technique is referred to as 'semi-gradient descent,' i.e., the dependence of the target on the parameter $\theta$ is ignored."}, {"section_index": "4", "section_name": "4 FAST REWARD PROPAGATION VIA OPTIMALITY TIGHTENING", "section_text": "Investigating the cost function given in Eq. (2) more carefully, we observe that it operates on a set of short one-step sequences, each characterized by the tuple $(s_j, a_j, r_j, s_{j+1})$. Intuitively, each step encourages an update of the parameters $\theta$ such that the action-value function for the chosen action $a_j$, i.e., $Q_\theta(s_j, a_j)$, is closer to the obtained reward plus the best achievable future value, i.e., $y_j = r_j + \gamma \max_a Q(s_{j+1}, a)$. As we expect from the Bellman optimality equation, it is instructive to interpret this algorithm as propagating reward information from time $j+1$ backwards to time $j$.
To understand the shortcomings of this procedure consider a situation where the agent only receives a sparse and delayed reward once reaching a target in a maze. Further let $|P|$ characterize the shortest path from the agent's initial position to the target. For a long time, no real reward is available, and the aforementioned algorithm propagates randomly initialized future rewards. Once the target is reached, real reward information is available. Due to the cost function and its property of propagating reward time-step by time-step, it is immediately apparent that it takes at least an additional $O(|P|)$ iterations until the observed reward impacts the initial state.
In the following we propose a technique which increases the speed of propagation and achieves improved convergence for deep Q-learning. We achieve this improvement by taking advantage of longer state-action-reward-sequences which are readily available in the "experience replay memory." Not only do we propagate information from time instances in the future to our current state, but we also pass information from states several steps in the past.
Even though we expect to see substantial improvements on sequences where rewards are sparse or only available at terminal states, we also demonstrate significant speedups for situations where rewards are obtained frequently. This is intuitive as the Q-function represents an estimate for any reward encountered in the future. Faster propagation of future and past rewards to a particular state is therefore desirable.
Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algorithm that exploits longer state-transitions in experience replays by tightening the optimization via constraints. For notational simplicity, we assume that the environmental dynamics is deterministic, i.e., the new state and the reward are solely determined by the current state and action. It is possible to show that mathematically our proposed approach also approximately works in stochastic environments. Please see details in the appendix. From the Bellman optimality equation we know that the following series of equalities holds for the optimal Q-function $Q^*$:

$$Q^*(s_j, a_j) = r_j + \gamma \max_a Q^*(s_{j+1}, a) = r_j + \gamma \max_a \Big[ r_{j+1} + \gamma \max_{a'} \Big[ r_{j+2} + \gamma \max_{a''} Q^*(s_{j+3}, a'') \Big] \Big] = \cdots$$

Evaluating such a sequence exactly is not possible in a reinforcement learning setting since the enumeration of intermediate states $s_{j+i}$ requires exponential time complexity $O(|A|^i)$. It is however possible to take advantage of the episodes available in the replay memory $D$ by noting that the following sequence of inequalities holds for the optimal action-value function $Q^*$ (with the greedy policy), irrespective of whether the policy $\pi$ generating the sequence of actions $a_j, a_{j+1}$, etc., which results in rewards $r_j, r_{j+1}$, etc., is optimal or not:

$$Q^*(s_j, a_j) = r_j + \gamma \max_a Q^*(s_{j+1}, a) \ge \cdots \ge \sum_{i=0}^{k} \gamma^i r_{j+i} + \gamma^{k+1} \max_a Q^*(s_{j+k+1}, a) = L_{j,k}.$$

Note the definition of the lower bounds $L_{j,k}$ for sample $j$ and time horizon $k$ in the aforementioned series of inequalities.
We can also use this series of inequalities to define upper bounds. To see this note that

$$Q^*(s_{j-k-1}, a_{j-k-1}) - \sum_{i=0}^{k} \gamma^i r_{j-k-1+i} - \gamma^{k+1} Q^*(s_j, a_j) \ge 0,$$

which follows from the definition of the lower bound by dropping the maximization over the actions, and a change of indices from $j$ to $j-k-1$. Reformulating the inequality yields an upper bound $U_{j,k}$ for sample $j$ and time horizon $k$ by fixing state $s_j$ and action $a_j$ as follows:

$$U_{j,k} = \gamma^{-k-1} \Big[ Q^*(s_{j-k-1}, a_{j-k-1}) - \sum_{i=0}^{k} \gamma^i r_{j-k-1+i} \Big] \ge Q^*(s_j, a_j).$$
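To make the bound computations concrete, the following is a minimal NumPy sketch of how $L_j^{\max}$ and $U_j^{\min}$ could be evaluated from one stored episode, with all Q terms coming from the frozen network $Q_{\theta^-}$. The function name, the array layout, and the discount value are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def optimality_bounds(rewards, q_values, q_max_next, j, K, gamma=0.99):
    """Compute L_j^max and U_j^min for time-step j of one stored episode.

    rewards    : r_0 ... r_{T-1} for the episode
    q_values   : Q_theta-(s_t, a_t) along the stored trajectory
    q_max_next : max_a Q_theta-(s_{t+1}, a) along the stored trajectory
    The list/array layout above is an assumption made for this sketch.
    """
    lower, upper = [], []
    for k in range(1, K + 1):
        if j + k + 1 < len(rewards):            # lower bound from the future
            discounted = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
            lower.append(discounted + gamma ** (k + 1) * q_max_next[j + k])
        if j - k - 1 >= 0:                      # upper bound from the past
            discounted = sum(gamma ** i * rewards[j - k - 1 + i] for i in range(k + 1))
            upper.append((q_values[j - k - 1] - discounted) / gamma ** (k + 1))
    l_max = max(lower) if lower else -np.inf
    u_min = min(upper) if upper else np.inf
    return l_max, u_min
```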
In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we propose to optimize the Bellman equation subject to constraints $Q_\theta(s_j, a_j) \ge L_j^{\max} = \max_{k \in \{1,\ldots,K\}} L_{j,k}$, which defines the largest lower bound, and $Q_\theta(s_j, a_j) \le U_j^{\min} = \min_{k \in \{1,\ldots,K\}} U_{j,k}$, which specifies the smallest upper bound. Hereby, $L_{j,k}$ and $U_{j,k}$ are computed using the Q-function $Q_{\theta^-}$ with a recent estimated parameter $\theta^-$ rather than the unknown optimal Q-function $Q^*$, and the integer $K$ specifies the number of future and past time steps which are considered. Also note that the target used in the Bellman equation is obtained from $y_j = L_{j,0} = r_j + \gamma \max_a Q_{\theta^-}(s_{j+1}, a)$. In this way, we ignore the dependence of the bounds and the target on the parameter $\theta$ to stabilize the training. Taking all the aforementioned definitions into account, we propose the following program for reinforcement learning tasks:

$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \left(Q_\theta(s_j, a_j) - y_j\right)^2 \quad \text{s.t.} \quad \begin{cases} Q_\theta(s_j, a_j) \ge L_j^{\max} & \forall (s_j, a_j) \in B \\ Q_\theta(s_j, a_j) \le U_j^{\min} & \forall (s_j, a_j) \in B \end{cases} \quad (3)$$

This program differs from the classical approach given in Eq. (2) via the constraints, which is crucial. Intuitively, the constraints encourage faster reward propagation as we show next, and result in tremendously better results as we will demonstrate empirically in Sec. 5.
Before doing so we describe our optimization procedure for the constrained program in Eq. (3) more carefully. The cost function is generally non-convex in the parameters $\theta$, and so are the constraints. We therefore make use of a quadratic penalty method to reformulate the program into

$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \Big[ \left(Q_\theta(s_j, a_j) - y_j\right)^2 + \lambda \big(L_j^{\max} - Q_\theta(s_j, a_j)\big)_+^2 + \lambda \big(Q_\theta(s_j, a_j) - U_j^{\min}\big)_+^2 \Big], \quad (4)$$

where $\lambda$ is a penalty coefficient and $(x)_+ = \max(0, x)$ is the rectifier function. Augmenting the cost function with $\lambda (L_j^{\max} - Q_\theta(s_j, a_j))_+^2$ and/or $\lambda (Q_\theta(s_j, a_j) - U_j^{\min})_+^2$ results in a penalty whenever any optimality bounding constraint gets violated. The quadratic penalty function is chosen for simplicity. The penalty coefficient $\lambda$ can be set as a large positive value or adjusted in an annealing scheme during training. In this work, we fix its value, due to time constraints. We optimize this cost function with stochastic (sub-)gradient descent using an experience replay memory from which we randomly draw samples, as well as their successors and predecessors. We emphasize that the derivatives correcting the prediction of $Q(s_j, a_j)$ not only depend on the Q-function from the immediately successive time step $Q(s_{j+1}, a)$ stored in the experience replay memory, but also on more distant time instances if constraints are violated. Our proposed formulation and the resulting optimization technique hence encourage faster reward propagation, and the number of time steps depends on the constant $K$ and the quality of the current Q-function. We summarize the proposed method in Algorithm 1 (a small sketch of the penalized loss is given at the end of this section).

Output: Parameters $\theta$ of a Q-function
Initialize: $\theta$ randomly, set $\theta^- = \theta$
for episode = 1 to M do
  initialize $s_1$;
  for t = 1 to T do
    Choose action $a_t$ according to the $\epsilon$-greedy strategy;
    Observe reward $r_t$ and next state $s_{t+1}$;
    Store the tuple $(s_t, a_t, r_t, \cdot, s_{t+1})$ in replay memory $D$;
    Sample a minibatch of tuples $B = \{(s_j, a_j, r_j, R_j, s_{j+1})\}$ from replay memory $D$;
    Update $\theta$ with one gradient step of the cost function given in Eq. (4);
    Reset $\theta^- = \theta$ every C steps;
  end
  for t = T to 1 do
    Compute $R_t = r_t + \gamma R_{t+1}$;
    Insert $R_t$ into the corresponding tuple in replay memory $D$;
  end
end
Algorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.

The computational complexity of the proposed approach increases with the number of considered time steps $K$, since additional forward passes are required to compute the bounds $L_j^{\max}$ and $U_j^{\min}$. However, we can increase the memory size on the GPU to compute both the bounds and targets in a single forward pass if $K$ is not too large. If at all a problem, we can further alleviate this increase by randomly sampling a subset of the constraints rather than exhaustively using all of them. More informed strategies regarding the choice of constraints are possible as well, since we may expect lower bounds in the more distant future to have a larger impact early in the training. In contrast, once the algorithm is almost converged we may expect lower bounds close to the considered time-step to have bigger impact.
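As a companion to Eq. (4), here is a minimal NumPy sketch of the penalized minibatch loss. The array names and the default penalty coefficient are our own illustrative choices (the experiments in Sec. 5 fix lambda = 4), not the authors' code.

```python
import numpy as np

def rectifier(x):
    # (x)_+ = max(0, x), applied elementwise
    return np.maximum(0.0, x)

def penalized_loss(q_pred, target, l_max, u_min, lam=4.0):
    """Quadratic-penalty loss of Eq. (4) for one minibatch.

    q_pred : Q_theta(s_j, a_j) for each sampled tuple
    target : y_j = r_j + gamma * max_a Q_theta-(s_{j+1}, a)
    l_max  : largest lower bound L_j^max over horizons k = 1..K
    u_min  : smallest upper bound U_j^min over horizons k = 1..K
    """
    bellman = (q_pred - target) ** 2
    lower_violation = rectifier(l_max - q_pred) ** 2   # Q below a lower bound
    upper_violation = rectifier(q_pred - u_min) ** 2   # Q above an upper bound
    return np.sum(bellman + lam * (lower_violation + upper_violation))
```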
To efficiently compute the discounted reward over multiple time steps we add a new element to the experience replay structure. Specifically, in addition to state, action, reward and next state for time-step $j$, we also store the real discounted return $R_j$, which is the discounted cumulative return achieved by the agent in its game episode. $R_j$ is computed via $R_j = \sum_{\tau=j}^{T} \gamma^{\tau-j} r_\tau$, where $T$ is the end of the episode and $\gamma$ is the discount factor. $R_j$ is then inserted into the replay memory after the termination of the current episode or after reaching the limit of steps (see the sketch below). All in all, the structure of our experience replay memory consists of tuples of the form $(s_j, a_j, r_j, R_j, s_{j+1})$. In practice, we also found that incorporating $R_j$ in the lower bound calculation can further improve the stability of the training.
We leave the questions regarding a good choice of penalty function and a good choice of the penalty coefficients to future work. At the moment we use a quadratic penalty function and a constant penalty coefficient identical for both bounds. More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following.

Figure 1: Improvements of our method trained on 10M frames compared to results of 200M frame DQN training presented by Mnih et al. (2015), using the metric given in Eq. (5).
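The backward pass that fills in the discounted returns $R_j$ at the end of an episode (the second inner loop of Algorithm 1) can be sketched as follows; the list-of-entries layout and the discount value are our own assumptions for illustration.

```python
def insert_discounted_returns(episode, gamma=0.99):
    """Backward pass over one finished episode: R_t = r_t + gamma * R_{t+1},
    computed from the end of the episode. `episode` is a list of mutable
    [s, a, r, R, s_next] entries with the R slot still unset.
    """
    running_return = 0.0
    for step in reversed(episode):
        running_return = step[2] + gamma * running_return  # step[2] is r_t
        step[3] = running_return                           # fill the R slot
    return episode
```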
"}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013) as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high-dimensional output. Moreover, the intrinsic mechanism varies tremendously for each game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.
Following existing work (Mnih et al., 2015), our agent predicts an action based on only raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84 x 84 grayscale image $s_t$. The first layer is a convolutional layer with 32 filters of size 8 x 8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4 x 4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3 x 3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game (a minimal sketch of this architecture is given below). The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing $\epsilon$-greedy exploration and also applied RMSProp for gradient descent. As in previous work we combine four frames into a single step for processing. We chose the hyperparameter $K = 4$, for GPU memory efficiency when dealing with mini-batches. In addition, we also include the discounted return $R_j = L_{j,\infty}$ in the lower bound calculation to further stabilize the training. We use the penalty coefficient $\lambda = 4$, which was obtained by coarsely tuning performance on the games 'Alien,' 'Amidar,' 'Assault,' and 'Asterix.' Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.

Figure 2: Improvements of our method trained on 10M frames compared to results of 10M frame DQN training, using the metric given in Eq. (5).
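For concreteness, here is a minimal sketch of the Q-network architecture described above, written with PyTorch for illustration; the paper does not specify a framework, so the module layout and the num_actions parameter are our own assumptions.

```python
import torch.nn as nn

class QNetwork(nn.Module):
    """Q-function approximator with the layer sizes described above.
    Input: a stack of four 84x84 grayscale frames; output: one Q-value
    per action.
    """
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 84x84 input shrinks to 7x7
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))
```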
"}, {"section_index": "6", "section_name": "5.1 EVALUATION", "section_text": "In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames or 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm which ran for only 10M frames or 2.5M steps, i.e., 20 times fewer data, due to time constraints. Instead of training more than 10 days we manage to finish training in less than one day. Furthermore, for a fair comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.
We strictly follow the evaluation procedure in (Mnih et al., 2015), which is often referred to as '30 no-op evaluation.' During both training and testing, at the start of the episode, the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An $\epsilon$-greedy policy with $\epsilon = 0.05$ is used. Specifically, for each run, the game episode starts with at most 30 no-op steps, and ends with 'death' or after a maximum of 5 minutes of game-play, which corresponds to 18000 frames.
Our training consists of $M = 40$ epochs, each containing 250000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch, and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent's evaluation as the result of the game. Almost all hyperparameters are selected identically to Mnih et al. (2015) and Nair et al. (2015).
To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using

$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Baseline}}}{\max\{\text{Score}_{\text{Human}}, \text{Score}_{\text{Baseline}}\} - \text{Score}_{\text{Random}}}. \quad (5)$$

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is only trained for 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as 'Atlantis,' 'Double Dunk,' and 'Krull.' The remaining 19 games often require a long training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.
We select this approach because the denominator choice of either human or baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.
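The improvement metric of Eq. (5) is straightforward to compute; a small helper, written here for illustration:

```python
def improvement_percent(score_agent, score_baseline, score_human, score_random):
    """Relative improvement of Eq. (5), in percent. The denominator uses
    whichever of the human or baseline score is larger, so that small
    absolute gains on easy games are not reported as large improvements.
    """
    denom = max(score_human, score_baseline) - score_random
    return 100.0 * (score_agent - score_baseline) / denom
```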
Table 1: Mean and median human-normalized scores. DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

              | Training Time              | Mean    | Median
Ours (10M)    | less than 1 day (1 GPU)    | 345.70% | 105.74%
DQN (200M)    | more than 10 days (1 GPU)  | 241.06% | 93.52%
D-DQN (200M)  | more than 10 days (1 GPU)  | 330.3%  | 114.7%

As suggested by van Hasselt et al. (2015), we use the following score

$$\text{Score}_{\text{Normalized}} = \frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}} \quad (6)$$

to summarize the performance of our algorithm in a single number. We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1 (a small helper implementing this normalization is sketched at the end of this section). We observe our technique with 10M frames to achieve comparable scores to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016) to further improve the accuracy and training speed.
In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition, we demonstrate two additional techniques: 'DQN+return' and 'DQN(lambda).' 'DQN+return' uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. 'DQN(lambda)' combines TD(lambda) with the DQN algorithm. We illustrate the performance of those four algorithms on the six games 'Frostbite,' 'Atlantis,' 'Zaxxon,' 'H.E.R.O,' 'Q*Bert,' and 'Chopper Command.' We observe our method to achieve higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results.

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN(lambda) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied.

In order to further illustrate the effectiveness of our method, we compare our results with our implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique.
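As referenced above, a small helper implementing the normalization of Eq. (6), written here for illustration:

```python
def normalized_score(score_agent, score_human, score_random):
    """Normalized score of Eq. (6): 0% corresponds to random play and
    100% to human-level play (van Hasselt et al., 2015)."""
    return 100.0 * (score_agent - score_random) / (score_human - score_random)
```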
"}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than the classical DQN. Our method significantly outperforms competing approaches even when trained on a small fraction of the data on the Atari 2600 domain. In the future, we plan to investigate the impact of penalty functions, advanced constrained optimization techniques and explore potential synergy with other techniques."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. of Artificial Intelligence Research, 2013.

Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. PAMI, 2013.

D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.

C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-free episodic control. In http://arxiv.org/pdf/1606.04460v1.pdf, 2016.

G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.

L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JMLR, 1996.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. NIPS, 2012.

S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proc. Int. Jt. Conf. Neural Netw., 2010.

Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 2015.

L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 1992.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 2015.

V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In https://arxiv.org/abs/1602.01783, 2016.

R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. In Proc. NIPS, 2016.

A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, V. Panneershelvam, A. De Maria, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. Massively parallel methods for deep reinforcement learning. In https://arxiv.org/abs/1507.04296, 2015.

I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In http://arxiv.org/abs/1602.04621, 2016.

W. P. Powell. Approximate Dynamic Programming. Wiley, 2011.

M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Proc. ECML, 2005.

T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In Proc. ICLR, 2016.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
Mastering the game of Go with deep neural\nnetworks and tree search. Nature, 2016.\n\nI. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS,\n2014.\n\nR. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.\n\nS. Thrun and A. Schwartz. Issues in using function approxima- tion for reinforcement learning. In Proc.\nConnectionist Models Summer School, 1993.\n\nJ.N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. 1997.\nH. van Hasselt. Double Q-learning. In Proc. NIPS, 2010.\n\nH. van Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-learning. In\nhttps://arxiv.org/abs/1509.06461, 2015.\n\nZ. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling Network Architectures\nfor Deep Reinforcement Learning. In https://arxiv.org/abs/1511.06581, 2015.\n\nC. J.C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.\nC. J.C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 1992.\n\nP. Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural\nNetworks, 2009."}, {"section_index": "9", "section_name": "OPTIMALITY TIGHTENING FOR STOCHASTIC ENVIRONMENTS", "section_text": "Similar to the inequalities we obtained for deterministic environments, we can also derive the fol.\nlowing sequence of inequalities holds for the optimal action-value function Q* (with the greedy\npolicy), under the expectation of the environmental dynamics:\nSo we have the following expectation constraint, on trajectories from state s; and action a;\nE[Q*(s;,a;) \u2014 Lyx] >|\nKR\nE[Q*(sj,4;) \u2014 (Q*1Q*(8j-4-a, aj\u2014n\u20141) \u2014 So je -14a)] <\n\ni=0\nWith these expectation constraints, we can formulate a constrained optimization problem as follows\nApplying the quadratic penalty function method, we obtain the objective\nPlease note that here we provide a mathematical derivation of our approach for stochastic environ:\nments. We expect that it would work in practice, but due to time constraints and the lack of gooc\nstochastic simulators, we cannot provide any empirical results here.\n2\n\nIV\n\nEfr; + ymax Q*(sj+1,4)]\n\nk\nEl) yrj4i th max Q\"(s}+441;@)]\ni=0\nk\n\nE[Q* (83,43) \u2014 (So yirige th max Q*(8j+441,0))] 2 0\ni=0\nWe can also use this series of inequalities to define upper bounds, on trajectories to state s; and\naction a;.\nE[Q*(s;,a;) \u2014Uj%] <0\nmin\n0\n\nSS (Qo(sj,.43) = ys)\u201d\n\n(sj,07,8j41,79)\u20acB\n\nmin, E[Qo(s;,a;) \u2014 Lyn\nmax, E[Qoe(s;,a;) \u2014 Uj,\nSS [leetsy. a5) = a5)? + Alma BLL 4 \u2014 Qols5.45)]4 max El(Qols;.45) ~U5a)18)\n\n(s;,05,7j,8341)\u20acB\nBy applying the Jensen\u2019s inequality, we are able to obtain an upper bound by first exchanging the\nexpectation with the max and then exchanging the expectation with the rectifier function, because\nboth the max function and the rectifier function are convex.\nSS __|@r(si.05) ay)? + BlNangx Le ~ Qolsjoa5)8] + BINQn(s).45) ~ max .)2)\n\n550557758541) \u20acB\n[t is easy to see that, since we have trajectory samples in the replay memory which were drawn\nunder the environmental dynamics, we can perform stochastic optimization using these trajectories.\nIn this way, a sample of this upper bound is identical to that in the deterministic setting in Eq. 
We present our quantitative results in Table S1 and Table S2. We also illustrate the normalized score provided in Eq. (6) over the number of episodes in Fig. S1.

Table S1: Raw scores across 49 games, using 30 no-op start evaluation (5 minutes emulator time, 18000 frames, epsilon = 0.05). Results of DQN are taken from Mnih et al. (2015).

Game                 | Random  | Human  | DQN 200M | Ours 10M
Alien                | 227.80  | 6875   | 3069     | 1864
Amidar               | 5.8     | 1676   | 739.5    | 565.67
Assault              | 222.4   | 1496   | 3359     | 5142.37
Asterix              | 210     | 8503   | 6012     | 5408.33
Asteroids            | 719.1   | 13157  | 1629     | 1481.67
Atlantis             | 12850   | 29028  | 85641    | 316766.67
Bank Heist           | 14.2    | 734.4  | 429.7    | 596
Battle Zone          | 2360    | 37800  | 26300    | 30800
Beam Rider           | 363.9   | 5775   | 6846     | 8069
Bowling              | 23.1    | 154.8  | 42.4     | 49.3
Boxing               | 0.1     | 4.3    | 71.8     | 81.17
Breakout             | 1.7     | 31.8   | 401.2    | 229.79
Centipede            | 2091    | 11963  | 8309     | 4470.06
Chopper Command      | 811     | 9882   | 6687     | 6360
Crazy Climber        | 10781   | 35411  | 114103   | 114146
Demon Attack         | 152.1   | 3401   | 9711     | 5738.67
Double Dunk          | -18.6   | -15.5  | -18.1    | -10.07
Enduro               | 0       | 309.6  | 301.8    | 672.83
Fishing Derby        | -91.7   | 5.5    | -0.8     | 5.27
Freeway              | 0       | 29.6   | 30.3     | 31.3
Frostbite            | 65.2    | 4335   | 328.3    | 3974.11
Gopher               | 257.6   | 2321   | 8520     | 4660
Gravitar             | 173     | 2672   | 306.7    | 346.67
H.E.R.O              | 1027    | 25763  | 19950    | 19975
Ice Hockey           | -11.2   | 0.9    | -1.6     | -3.43
Jamesbond            | 29      | 406.7  | 576.7    | 1088.33
Kangaroo             | 52      | 3035   | 6740     | 11716.67
Krull                | 1598    | 2395   | 3805     | 9461.1
Kung-Fu Master       | 258.5   | 22736  | 23270    | 27820
Montezuma's Revenge  | 0       | 4376   | 0        | 23.33
Ms. Pacman           | 307.3   | 15693  | 2311     | 1805
Name This Game       | 2292    | 4076   | 7257     | 7314.67
Pong                 | -20.7   | 9.3    | 18.9     | 19.4
Private Eye          | 24.9    | 69571  | 1788     | 342.37
Q*Bert               | 163.9   | 13455  | 10596    | 12355
River Raid           | 1339    | 13513  | 8316     | 8028.33
Road Runner          | 11.5    | 7845   | 18257    | 29346.67
Robotank             | 2.2     | 11.9   | 51.6     | 34.5
Seaquest             | 68.4    | 20182  | 5286     | 4070
Space Invaders       | 148     | 1652   | 1976     | 995
Star Gunner          | 664     | 10250  | 57997    | 16653.95
Tennis               | -23.8   | -8.9   | -2.5     | -1
Time Pilot           | 3568    | 5925   | 5947     | 5423.33
Tutankham            | 11.4    | 167.6  | 186.7    | 232
Up and Down          | 533.4   | 9082   | 8456     | 14406
Venture              | 0       | 1188   | 380      | 286.67
Video Pinball        | 16257   | 17298  | 42684    | 74873.2
Wizard of Wor        | 563.5   | 4757   | 3393     | 4716.67
Zaxxon               | 32.5    | 9173   | 4977     | 10598
Table S2: Normalized scores across 49 games, computed with Eq. (6); DQN results use 200M training frames, ours use 10M frames.

Game                 | DQN 200M  | Ours 10M
Alien                | 42.74%    | 24.62%
Amidar               | 43.93%    | 33.52%
Assault              | 246.27%   | 386.31%
Asterix              | 69.96%    | 62.68%
Asteroids            | 7.32%     | 6.13%
Atlantis             | 449.94%   | 1878.60%
Bank Heist           | 57.69%    | 80.78%
Battle Zone          | 67.55%    | 80.25%
Beam Rider           | 119.79%   | 142.39%
Bowling              | 14.65%    | 19.89%
Boxing               | 1707.14%  | 1930.24%
Breakout             | 1327.24%  | 757.77%
Centipede            | 62.99%    | 24.10%
Chopper Command      | 64.78%    | 61.17%
Crazy Climber        | 419.50%   | 419.67%
Demon Attack         | 294.22%   | 171.95%
Double Dunk          | 16.13%    | 275.16%
Enduro               | 97.48%    | 217.32%
Fishing Derby        | 93.52%    | 99.76%
Freeway              | 102.36%   | 105.74%
Frostbite            | 6.16%     | 91.55%
Gopher               | 400.43%   | 213.36%
Gravitar             | 5.35%     | 6.95%
H.E.R.O              | 76.50%    | 76.60%
Ice Hockey           | 79.34%    | 64.22%
Jamesbond            | 145.00%   | 280.47%
Kangaroo             | 224.20%   | 391.04%
Krull                | 276.91%   | 986.59%
Kung-Fu Master       | 102.38%   | 122.62%
Montezuma's Revenge  | 0%        | 0.53%
Ms. Pacman           | 13.02%    | 9.73%
Name This Game       | 278.31%   | 281.54%
Pong                 | 132%      | 133.67%
Private Eye          | 2.54%     | 0.46%
Q*Bert               | 78.49%    | 91.73%
River Raid           | 57.31%    | 54.95%
Road Runner          | 232.92%   | 374.48%
Robotank             | 509.28%   | 332.99%
Seaquest             | 25.94%    | 19.90%
Space Invaders       | 121.54%   | 56.31%
Star Gunner          | 598.10%   | 166.81%
Tennis               | 142.95%   | 153.02%
Time Pilot           | 100.93%   | 78.72%
Tutankham            | 112.23%   | 141.23%
Up and Down          | 92.68%    | 162.38%
Venture              | 31.99%    | 24.13%
Video Pinball        | 2538.62%  | 5630.76%
Wizard of Wor        | 67.47%    | 99.04%
Zaxxon               | 54.09%    | 115.59%

Figure S1: Convergence of mean and median of normalized percentages on 49 games"}]
By1snw5gl
[{"section_index": "0", "section_name": "L-SR1: A SECOND ORDER OPTIMIZATION METHOD FOR DEEP LEARNING", "section_text": "Vivek Ramamurthy
vivek.ramamurthy@sentient.ai
Nigel Duffy

We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyperparameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks."}, {"section_index": "1", "section_name": "1 MOTIVATION", "section_text": "Second order methods hold great potential for distributing the training of deep neural networks. Due to their use of curvature information, they can often find good minima in far fewer steps than first order methods such as stochastic gradient descent (SGD). Moreover, stochastic second order methods can benefit from larger mini-batches (Le et al., 2011). This is because they estimate second derivatives via differences between estimated gradients. The gradient estimates need to have less variance, so that when we take their differences, the result has low variance. As a result they provide a different trade-off between number of steps and mini-batch size than do SGD-like methods. This trade-off is interesting, because while steps must be evaluated sequentially, a mini-batch may be evaluated in parallel. Thus, second order methods present an opportunity to extract more parallelism in neural network training. In particular, when mini-batches are sufficiently large, their evaluation may be distributed. Furthermore, there are relatively fewer hyperparameters to tune in second order methods, compared to variants of stochastic gradient descent.
L-BFGS (Nocedal, 1980; Liu & Nocedal, 1989) is perhaps the most commonly used second order method in machine learning. BFGS is a quasi-Newton method that maintains an approximation to the inverse Hessian of the function being optimized. L-BFGS is a limited memory version of BFGS that stores the most recent updates to the inverse Hessian approximation and can therefore be used practically for large scale problems. L-BFGS is typically combined with a line search technique to choose an appropriate step size at each iteration. L-BFGS has been used to good effect in convex optimization problems in machine learning, but has not found effective use in large scale non-convex problems such as deep learning.
Three critical weaknesses have been identified. First, we know that training deep neural networks involves minimizing non-convex error functions over continuous, high dimensional spaces. It has been argued that the proliferation of saddle points in these problems presents a deep and profound difficulty for quasi-Newton optimization methods (Dauphin et al., 2014).
Furthermore, it has been argued that curvature matrices generated in second order methods are often ill-conditioned, and these need to be carefully repaired. A variety of approaches to this have been suggested, including the use of an empirical Fisher diagonal matrix (Martens, 2016). Finally, popular quasi-Newton"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "approaches, such as L-BFGS (in their default form), require line search to make parameter updates, which requires many more gradient and/or function evaluations.
We propose L-SR1, a second order method that addresses each of these concerns. SR1 (Symmetric Rank One) is a quasi-Newton method that uses a rank one update for updating the Hessian approximation of the function being optimized (Nocedal & Wright, 2006). Unlike BFGS, the SR1 update does not guarantee positive definiteness of the updated matrix. This was considered a major problem in the early days of nonlinear optimization when only line search iterations were used, and possibly led to the obscurity of SR1 outside the optimization community. However, with the development of trust-region methods, the SR1 updating formula is potentially very useful, and its ability to generate indefinite Hessian approximations can actually prove to be advantageous.
Two other insights make L-SR1 practical by removing the requirement for a line search and addressing the conditioning problem. First, we replace the line search using a trust region approach. While L-BFGS using line search is well studied, recently, an L-BFGS method that uses a trust-region framework has also been proposed (2008). Second, we combine L-SR1 with batch normalization. Batch normalization is a technique of normalizing inputs to layers of a neural network, used to address a phenomenon known as internal covariate shift during training (Ioffe & Szegedy, 2015). Our hypothesis is that batch normalization may cause the parameters of a neural network to be suitably scaled so that the Hessian becomes better conditioned. We tested this hypothesis empirically and outline the results below.
We believe that it is possible to overcome saddle points using rank-one update based second order methods. The more common rank-two methods, e.g. L-BFGS, maintain a positive definite approximation to the inverse of the Hessian, by design (Nocedal & Wright, 2006). At saddle points, the true Hessian cannot be well approximated by a positive definite matrix, causing commonly used second order methods to go uphill (Dauphin et al., 2014). On the other hand, rank-one approaches such as SR1 don't maintain this invariant, so they can go downhill at saddle points. Numerical experiments (Conn et al., 1991) suggest that the approximate Hessian matrices generated by the SR1 method show faster progress towards the true Hessian than those generated by BFGS. This suggests that a limited memory SR1 method (L-SR1, if you like) could potentially outperform L-BFGS in the task of high dimensional optimization in neural network training. The building blocks needed to construct an L-SR1 method have been suggested in the literature (Byrd et al., 1994; Khalfan et al., 1993). To the best of our knowledge, however, there is no complete L-SR1 method previously described in the literature.¹ This prompted us to develop and test the approach, specifically on the large scale non-convex problems that arise in deep learning.
¹The reference Brust et al. (2016) describes an approach to solve the trust region sub-problem encountered in an L-SR1 method, but does not describe the L-SR1 method itself.
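To make the contrast with BFGS concrete, here is a minimal NumPy sketch of a single SR1 update with the standard skip rule (Nocedal & Wright, 2006); the threshold value is a conventional safeguard, not one taken from this paper.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """One SR1 update of a Hessian approximation B.
    s is the parameter step, y the gradient difference. Unlike BFGS, the
    result need not be positive definite, which is what lets SR1 model the
    indefinite curvature found at saddle points.
    """
    v = y - B @ s                        # residual of the secant condition
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                         # skip the update: denominator too small
    return B + np.outer(v, v) / denom    # symmetric rank-one correction
```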
We now briefly summarize some other second order approaches that have been suggested in the literature, in order to place our approach in context. Pearlmutter (1994) derived a technique that directly calculated the product of the Hessian with an arbitrary vector, and applied this technique to a few variants of backpropagation, thereby showing a way to use the full Hessian without needing to compute and store it. Martens (2010) used a generalization of this technique, introduced by Schraudolph (2002), to develop a second order optimization method based on the "Hessian-free" approach, using it to train deep auto-encoders (Martens, 2010), as well as recurrent neural networks (Martens & Sutskever, 2011). The "Hessian-free" approach is essentially a line search Newton-CG (Conjugate Gradient) method, also known as the truncated Newton method (Nocedal & Wright, 2006), in which the search direction is computed by applying CG to the Newton method, and terminating it once it has made sufficient progress. This approach differs from ours in its use of line search instead of a trust region method. Moreover, it computes Hessian-vector products using finite differencing, as opposed to the limited-memory symmetric rank one update with trust region method used in our approach. The cost of skipping the Hessian calculation in a truncated Newton method is one additional gradient evaluation per CG iteration (Nocedal & Wright, 2006). As mentioned previously, Dauphin et al. (2014) argue that in high dimensional problems of practical interest, the proliferation of saddle points poses greater difficulty than local minima. In a bid to escape these saddle points, they propose a second order optimization method called the saddle-free Newton method. Key to this approach is the definition of a class of generalized trust region methods. This class extends classical trust region methods in a couple of ways. A first order Taylor expansion of the function is minimized, instead of the second order Taylor expansion. Moreover, the constraint on the step norm is replaced by a generalized constraint on the distance between consecutive iterates. Our approach, by contrast, uses a classical trust-region method. Rather than compute the Hessian exactly, Dauphin et al. (2014) use an approach similar to Krylov subspace descent (Vinyals & Povey, 2012). The function is optimized in a lower-dimensional Krylov subspace, which is determined through Lanczos iteration of the Hessian (Vinyals & Povey, 2012). The Lanczos method may be considered a generalization of the CG method that can be applied to indefinite systems, and may be used to aid the CG method by gathering negative curvature information. The Lanczos method also involves finding an approximate solution to a trust-region subproblem in the range of a Krylov basis that it generates. This trust region problem differs from the one we solve, in that the Krylov basis generated has a special structure due to its mapping to a tridiagonal matrix (Nocedal & Wright, 2006).
It is worth noting that several approaches have been proposed to overcome the weaknesses of L-BFGS. First, it has been proposed to initialize L-BFGS with a number of SGD steps. However, this diminishes the potential for parallelism (Dean et al., 2012; Le et al., 2011). Second, it has been proposed to use "forgetting", where every few (say, for example, 5) steps, the history for L-BFGS is discarded. However, this greatly reduces the ability to use second order curvature information. There has also been a recent spurt of work on stochastic quasi-Newton methods for optimization. Byrd et al. (2016) propose a stochastic quasi-Newton method which uses the classical L-BFGS formula, but collects curvature information pointwise, at regular intervals, through sub-sampled Hessian vector products, rather than at every iteration. Mokhtari & Ribeiro (2014) propose RES, a regularized stochastic version of BFGS to solve convex optimization problems with stochastic objectives, and prove its convergence for bounded Hessian eigenvalues. Mokhtari & Ribeiro (2015) propose an online L-BFGS method for solving optimization problems with strongly convex stochastic objectives, and establish global almost sure convergence of their approach for bounded Hessian eigenvalues of sample functions. In the case of nonconvex stochastic optimization, Wang et al. (2014) propose, based on a general framework, two concrete stochastic quasi-Newton update strategies, namely stochastic damped-BFGS update and stochastic cyclic Barzilai-Borwein-like update, to adaptively generate positive definite Hessian approximations. They also analyze the almost sure convergence of these updates to stationary points. Keskar & Berahas (2015) propose ADAQN, a stochastic quasi-Newton algorithm for training RNNs. This approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method also uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. Finally, Curtis (2016) proposes a variable-metric algorithm for stochastic nonconvex optimization which exploits fundamental self-correcting properties of BFGS-type updating, and uses it to solve a few machine learning problems. As one may notice, all of these approaches adapt the BFGS-style rank two updates in different ways to solve convex and non-convex problems. In contrast, our approach uses SR1-type updates, which we think can help better navigate the pathological saddle points present in the non-convex loss functions found in deep learning, by not constraining the Hessian approximation to be positive definite, as in the case of BFGS-style updates. Comparison of our approach with one of these recent stochastic second order methods is an interesting next step.
In the Appendix, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited memory variants.
Our algorithm is synthesized as follows. We take the basic SR1 algorithm described in Nocedal & Wright (2006) (Algorithm 6.2), and represent the relevant input matrices using the limited-memory representations described in Byrd et al. (1994). The particular limited-memory representations used in the algorithm vary, depending on whether we use trust region or line search methods as subroutines to make parameter updates, as does some of the internal logic. For instance, if $k$ updates are made to the symmetric matrix $B_0$ using the vector pairs $\{(s_i, y_i)\}_{i=0}^{k-1}$ and the SR1 formula, the resulting matrix $B_k$ can be expressed as (Nocedal & Wright, 2006)

$$B_k = B_0 + (Y_k - B_0 S_k)(D_k + L_k + L_k^T - S_k^T B_0 S_k)^{-1}(Y_k - B_0 S_k)^T$$

where $S_k$, $Y_k$, $D_k$, and $L_k$ are defined as follows:

$$S_k = [s_0, \ldots, s_{k-1}], \qquad Y_k = [y_0, \ldots, y_{k-1}],$$
$$(L_k)_{i,j} = \begin{cases} s_{i-1}^T y_{j-1} & \text{if } i > j \\ 0 & \text{otherwise} \end{cases}, \qquad D_k = \mathrm{diag}\big(s_0^T y_0, \ldots, s_{k-1}^T y_{k-1}\big).$$

The self-duality of the SR1 method allows the inverse formula $H_k$ to be obtained simply by replacing $B$, $s$, and $y$ by $H$, $y$, and $s$, respectively, using standard matrix identities. Limited-memory SR1 methods can be derived exactly like in the case of the BFGS method. Additional details are present in the pseudocode provided in the Appendix. The algorithm we develop is general enough to work with any line search or trust region method. While we tested the algorithm with line search approaches described in Dennis Jr. & Schnabel (1983), and with the trust region approach described in Brust et al. (2016), in this paper, we focus our experimental investigations on using the trust region approach, and the advantage that provides over using other first and second order optimization methods.
We also make a note here about the space and time complexity of our algorithm. We respectively denote by $m$ and $n$ the memory size and the parameter dimension. We assume $m \ll n$. As discussed in Section 7.2 of Nocedal & Wright (2006), the limited-memory updating procedure of $B_k$ requires approximately $2mn + O(m^3)$ operations, and matrix vector products of the form $B_k v$ can be performed at a cost of $(4m+1)n + O(m^2)$ multiplications. Moreover, the Cholesky and eigenvalue decompositions we perform within our trust-region method for $m \times m$ matrices require $O(m^3)$ operations. It follows quite easily² from this that the space complexity of our algorithm is $O(mn)$, and the per iteration time complexity of our algorithm is $O(mn)$.
²Deep neural networks typically have parameter dimensions in the tens of millions, while the memory size typically does not exceed 10. So $n$ is indeed several orders of magnitude larger than $m$.
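As an illustration of how the compact representation above avoids ever forming an n x n matrix, here is a minimal NumPy sketch of the product $B_k v$; the function name and the choice $B_0 = \gamma I$ are our own assumptions for this sketch, and a practical implementation would cache the small middle matrix between iterations.

```python
import numpy as np

def lsr1_matvec(v, S, Y, gamma=1.0):
    """Product B_k v using the compact SR1 representation, with B_0 = gamma * I.
    S and Y are n x m arrays whose columns are the stored pairs s_i, y_i.
    """
    StS = S.T @ S
    StY = S.T @ Y
    D = np.diag(np.diag(StY))            # D_k: diagonal of S_k^T Y_k
    L = np.tril(StY, k=-1)               # L_k: strictly lower triangle
    M = D + L + L.T - gamma * StS        # middle m x m matrix
    W = Y - gamma * S                    # Y_k - B_0 S_k
    return gamma * v + W @ np.linalg.solve(M, W.T @ v)
```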
"}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In the following, we summarize the results of training standard neural networks on the MNIST and CIFAR10 datasets using our approach, and benchmarking the performance with respect to other first and second order methods. First, we compared our L-SR1 (with trust region) approach with Nesterov's Accelerated Gradient Descent (NAG), L-BFGS with forgetting every 5 steps, default SGD, AdaDelta, and SGD with momentum, by training small standard networks on the MNIST and CIFAR10 datasets. On these problems, we also studied the effect of varying the minibatch size for L-SR1, Adam (Kingma & Ba, 2014), and NAG. Next, we compared our L-SR1 with trust region approach with default hyperparameters, with a benchmark SGD with momentum, and Adam, by training a 20-layer deep residual network on the CIFAR10 dataset. Following that, we varied each hyperparameter of the L-SR1 with trust region approach to observe its effect on training the residual network on CIFAR10."}, {"section_index": "4", "section_name": "4.1 LENET-LIKE NETWORKS", "section_text": "For each approach, and for each dataset, we considered the case where our networks had batch normalization layers within them, and the case where they did not. The parameters of the networks were randomly initialized. All experiments were repeated 10 times to generate error bars."}, {"section_index": "5", "section_name": "4.1.1 MNIST", "section_text": "We considered the LeNet5 architecture in this case, which comprised 2 convolutional layers, followed by a fully connected layer and an outer output layer. Each convolutional layer was followed by a max-pooling layer. In the case where we used batch-normalization, each convolutional and fully connected layer was followed by a spatial batch normalization layer. We used a mini-batch size of 20 for the first order methods like NAG, SGD, AdaDelta and SGD with momentum, and a mini-batch size of 400 for the second order methods like L-SR1 and L-BFGS. The memory size was set to 5 for both L-SR1 and L-BFGS. The networks were trained for 20 epochs. Further details on the network architecture and other parameter settings are provided in the Appendix.

Figure 1: Variation of test loss with number of epochs, on the MNIST dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "6", "section_name": "4.1.2 CIFAR10", "section_text": "We considered a slight modification to the 'LeNet5' architecture described above. We used a mini-batch size of 96 for NAG, SGD, AdaDelta and SGD with momentum. The other mini-batch sizes and memory sizes for L-SR1 and L-BFGS were as above. As above, the networks were trained for 20 epochs. Further details on the network architecture and other parameter settings are provided in the Appendix.

Figure 2: Variation of test loss with number of epochs, on the CIFAR10 dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "7", "section_name": "4.1.3 VARIATION OF MINIBATCH SIZE", "section_text": "We also compared the variation of test loss between L-SR1, Adam and NAG, as we varied the mini-batch size from 500 to 1000 to 10000, in the presence of batch normalization. The network architectures were as above.
For minibatch sizes 500 and 1000, we trained the networks for 5 epochs, while for the minibatch size of 10000, the networks were trained for 200 epochs.

Figure 3: Variation of test loss with number of epochs, on the MNIST dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x and y-axes across figures are different.

Figure 4: Variation of test loss with number of epochs, on the CIFAR10 dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x and y-axes across figures are different."}, {"section_index": "8", "section_name": "4.1.4 DISCUSSION", "section_text": "Our first set of experiments (Figures 1 and 2) suggests that L-SR1 performs as well as, or slightly better than, all the first order methods on both the MNIST and CIFAR10 datasets, with or without batch normalization. L-SR1 is substantially better than L-BFGS in all settings, with or without forgetting. Forgetting appears to be necessary in order to get L-BFGS to work at all: without forgetting, the approach appears to remain stuck where it is initialized, so the plots for L-BFGS without forgetting have not been included. Batch normalization appears to improve the performance of all approaches, particularly the early performance of second order approaches like L-SR1 and L-BFGS.

The experiments with variation of minibatch sizes (Figures 3 and 4) seem to provide compelling evidence of the potential for distributed training of deep networks, as may be seen from Table 1. First, we note that first order methods like NAG are not as sensitive to the size of the minibatch as commonly understood. For example, a 20-fold increase in minibatch size did not decrease the speed of convergence by the same or a higher order of magnitude. Furthermore, approaches like L-SR1 and Adam appear to be much less sensitive to increasing minibatch size than NAG. This strengthens the case for their application to distributed training of deep neural networks. Finally, while Adam makes much faster initial progress than the other approaches, its final test loss by the end of training is worse than in the case of L-SR1.

One of the limitations of SR1 updating is that the denominator in the update can vanish. The literature however suggests that this happens rarely enough that the updates can simply be skipped when this phenomenon occurs, without affecting performance. In this regard, we had some interesting observations from our experiments.
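For reference, the vanishing-denominator safeguard we refer to takes the standard form given in the quasi-Newton literature (e.g. Nocedal & Wright, 2006): the rank-one correction is applied only when its denominator is safely bounded away from zero. The sketch below is illustrative; the tolerance r is a typical literature value, not necessarily the one used in our implementation.

    import numpy as np

    def sr1_update_with_skip(B, s, y, r=1e-8):
        """Full-matrix SR1 update with the standard skip rule: update only
        when |s^T (y - B s)| >= r * ||s|| * ||y - B s||.  Returns the new
        approximation and a flag recording whether the update was skipped."""
        residual = y - B @ s
        denom = residual @ s
        if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(residual):
            return B, True                     # skip: keep the previous B
        return B + np.outer(residual, residual) / denom, False

Skipped-update percentages such as those we report below can be obtained by accumulating the returned flag over the course of training.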
While in most cases updates were either never skipped, or skipped less than 2.5% of the time, the cases of MNIST training with batch normalization yielded abnormally high levels of skipped updates, ranging all the way from 7% to higher than 60% (for minibatch size 10000). While this did not seem to affect performance adversely, it certainly warrants future investigation. Moreover, a better understanding of the interplay between batch normalization and optimization could help inform potential improvements in optimization approaches.

Table 1: Speed of convergence of NAG, L-SR1, and Adam, with varying minibatch sizes."}, {"section_index": "9", "section_name": "4.2 RESIDUAL NETWORKS", "section_text": "We next considered a deeper residual network architecture described in section 4.2 of He et al. (2015b), with n = 3. This led to a 20-layer residual network including 9 shortcut connections. As in He et al. (2015b), we used batch normalization (Ioffe & Szegedy, 2015) and the same initialization method (He et al., 2015a).

We trained the residual network using the benchmark SGD with momentum, and other parameter settings as described in He et al. (2015b). We also trained the network using L-SR1 with default settings. These included a memory size of 5, a trust-region radius decrease factor of 0.5, and a trust-region radius increase factor of 2.0. Finally, we also compared with Adam, with default settings (Kingma & Ba, 2014). We used the same mini-batch size of 128 for all algorithms. Based on the learning rate schedule used, the learning rate for SGD with momentum was equal to 0.1 through the first 80 epochs, 0.01 up to 120 epochs, and 0.001 thereafter. Figure 5 shows the variation of test loss over epochs and over time. It should be noted that default L-SR1, with no parameter tuning at all, has a superior final test loss to Adam, and is competitive with SGD with momentum, which used custom parameters that were tuned carefully. L-SR1 does make slower progress over time, which can be further optimized. Finally, we note that the test loss for L-SR1 bounces around much more than the test loss for the other algorithms. This bears further exploration."}, {"section_index": "10", "section_name": "4.2.2 VARIATION OF L-SR1 HYPERPARAMETERS", "section_text": "We varied the hyperparameters of L-SR1 in turn, keeping the rest fixed. In each case, we trained the network for 200 epochs. We first considered varying the increase and decrease factors together. We considered a trust-region radius decrease factor of 0.2, 0.5 and 0.8, and a trust-region radius increase factor of 1.2 and 2.0 (the respective default values of these factors are 0.5 and 2.0). This led to six different combinations of decrease and increase factors. We kept the memory size and mini-batch size fixed at 5 and 128 respectively. Next, we considered memory sizes of 2 and 10 (in addition to 5, which we tried earlier), keeping the mini-batch size, decrease factor, and increase factor fixed at 128, 0.5, and 2.0 respectively. Finally, we considered mini-batch sizes of 512, 2048 and 8192 (in addition to 128, which we tried earlier), keeping the memory size, decrease factor, and increase factor fixed at 5, 0.5, and 2.0 respectively. Figure 6 shows the results.
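For clarity, the sketch below shows how the two factors being varied enter a standard trust-region radius rule; the acceptance thresholds (0.25 and 0.75) are common textbook values and are assumptions here, not tuned constants of our method.

    def update_trust_radius(delta, rho, step_norm, decrease=0.5, increase=2.0):
        """Adjust the trust-region radius from the agreement ratio
        rho = actual_reduction / predicted_reduction.  The defaults match
        the decrease factor 0.5 and increase factor 2.0 used above."""
        if rho < 0.25:                         # poor model fit: shrink the region
            return decrease * delta
        if rho > 0.75 and step_norm >= delta - 1e-12:
            return increase * delta            # good fit, step on the boundary: grow
        return delta                           # otherwise leave the radius unchanged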
The following may be noted, based on the experiments with L-SR1 for training a residual network on CIFAR10. While there is potential value in increasing and decreasing the trust region radius at different rates, our experiments suggest that it may not be necessary to tune these hyperparameters. There is no noticeable performance gain from using a higher memory size in L-SR1; furthermore, a smaller memory size performs at least as well as the default, which is good news given the consequent savings in storage and computational resources. L-SR1 is relatively insensitive to a 4-fold increase in mini-batch size from 128 to 512, and to a further 4-fold increase to 2048. The minibatch sensitivity of L-SR1 seems to be higher in the case of the residual network than for the LeNet-like networks seen earlier. Finally, we found the proportion of skipped updates in the case of residual networks to be less than 0.5% in all cases.

Figure 5: L-SR1 vs SGD vs Adam, on the CIFAR10 dataset, using a residual network. The x-axis on the left shows number of epochs, while the x-axis on the right shows time in seconds.

Figure 6: Variation of trust region radius increase and decrease factors, mini-batch size and memory size with number of epochs, on the CIFAR10 dataset, using a residual network. Note that the scales on the y-axes are different."}, {"section_index": "11", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we have described L-SR1, a new second order method to train deep neural networks. Our experiments suggest that this approach is, at the very least, competitive with first order methods, and substantially better than L-BFGS, a well-known second order method.
Our experiments also appear to validate our intuition about the ability of L-SR1 to overcome key challenges associated with second order methods, such as inappropriate handling of saddle points and poor conditioning of the Hessian. Our experimentation with the hyperparameters of L-SR1 suggested that it is relatively robust with respect to them and requires minimal tuning. Furthermore, we have evidence to suggest that L-SR1 is much more insensitive to larger minibatch sizes than a first order method like NAG. This suggests that L-SR1 holds promise for distributed training of deep networks, and we see our work as an important step toward that goal."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Johannes Brust, Jennifer B. Erway, and Roummel F. Marcia. On solving L-SR1 trust-region subproblems. arXiv:1506.07222v3, 2016.

Richard H. Byrd, Jorge Nocedal, and Robert B. Schnabel. Representations of quasi-Newton matrices and their use in limited-memory methods. Mathematical Programming, 63(1):129-156, 1994.

Frank Curtis. A self-correcting variable-metric algorithm for stochastic optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, pp. 632-641, 2016.

Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. CoRR, abs/1406.2572, 2014.

John E. Dennis Jr. and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, 1st edition, 1983.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015b.

Nitish Shirish Keskar and Albert S. Berahas. adaQN: An adaptive quasi-Newton algorithm for training RNNs. CoRR, abs/1511.01169, 2015.

Humaid Khalfan, Richard H. Byrd, and Robert B. Schnabel. A theoretical and experimental study of the symmetric rank one update. SIAM Journal on Optimization, 3(1):1-24, 1993.

Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503-528, 1989.

James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 735-742, 2010.

James Martens and Ilya Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, pp. 1033-1040, 2011.

Aryan Mokhtari and Alejandro Ribeiro. RES: regularized stochastic BFGS algorithm. IEEE Transactions on Signal Processing, 62(23):6089-6104, 2014.

Aryan Mokhtari and Alejandro Ribeiro. Global convergence of online limited memory BFGS. Journal of Machine Learning Research, 16(1):3151-3181, 2015.

Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773-782, 1980.

Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New York, 2nd edition, 2006.

Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6:147-160, 1994."}, {"section_index": "13", "section_name": "BACKGROUND", "section_text": "In the following, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited-memory variants. Further details may be found in Nocedal & Wright (2006).
In any optimization algorithm, there are two main ways of moving from the current point x_k to a new iterate x_{k+1}. One of them is line search: the algorithm picks a descent direction p_k and searches along this direction from the current iterate x_k for a new iterate with a lower function value. The distance \alpha to move along p_k can be found by solving the following one-dimensional minimization problem:

    \min_{\alpha} f(x_k + \alpha p_k).

Instead of an exact minimization, which may be expensive, the line search algorithm generates a limited number of trial step lengths until it finds one that produces a sufficient decrease in function value. At the new point, the process of computing the descent direction and step length is repeated.

The other way is to use a trust region method. In a trust region method, the information about f is used to construct a model function m_k which is supposed to approximate f near the current point x_k. Since the model m_k may not approximate f well when x is far from x_k, the search for a minimizer of m_k is restricted to some trust region of radius \Delta_k around x_k. To wit, the candidate step p approximately solves the following sub-problem:

    \min_{p : \|p\| \le \Delta_k} m_k(x_k + p).

If the candidate solution does not produce a sufficient decrease in f, the trust region is considered too large for the model function to approximate f well, so we shrink the trust region and re-solve the sub-problem.

Essentially, the line search and trust region approaches differ in the order in which they choose the direction and the magnitude of the move to the next iterate. In line search, the descent direction p_k is fixed first, and then the step length \alpha_k to be taken along that direction is computed. In a trust region method, a maximum distance equal to the trust-region radius \Delta_k is first set, and then a direction is determined within this radius that achieves the best improvement in the objective value. If such a direction does not yield sufficient improvement, the model function is deemed a poor approximation to the objective, and the trust-region radius \Delta_k is reduced until the approximation is good enough. Conversely, as long as the model function appears to approximate the objective function well, the trust region radius is increased until the approximation is no longer good enough.
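As an illustration of the interplay between the model decrease, the ratio test and the radius update, here is a minimal sketch of one trust-region iteration; it uses the simple Cauchy-point step for the subproblem purely for brevity, whereas the method we use in our experiments solves the subproblem as in Brust et al. (2016). The callables f, grad and hess_vec are assumed to be supplied by the user.

    import numpy as np

    def trust_region_step(f, grad, hess_vec, x, delta):
        """One illustrative trust-region iteration on the quadratic model
        m(p) = f(x) + g^T p + 0.5 p^T B p restricted to ||p|| <= delta."""
        g = grad(x)
        gBg = g @ hess_vec(x, g)
        # Cauchy point: minimiser of the model along -g, clipped to the region.
        tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
        p = -(tau * delta / np.linalg.norm(g)) * g
        predicted = -(g @ p + 0.5 * (p @ hess_vec(x, p)))   # m(0) - m(p) > 0
        rho = (f(x) - f(x + p)) / predicted                 # agreement ratio
        if rho < 0.25:
            delta *= 0.5                    # model approximates f poorly: shrink
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0                    # model is reliable: expand the region
        return (x + p if rho > 0 else x), delta             # accept improving steps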
Quasi-Newton methods are a useful alternative to Newton's method in that they do not require computation of the exact Hessian, and yet still attain good convergence. In place of the true Hessian \nabla^2 f_k, they use an approximation B_k, which is updated after each step based on information gained during the step. At each step, the new Hessian approximation B_{k+1} is required to satisfy the following condition, known as the secant equation:

    B_{k+1} s_k = y_k, \qquad \text{where} \quad s_k = x_{k+1} - x_k, \quad y_k = \nabla f_{k+1} - \nabla f_k.

Typically, B_{k+1} is also required to be symmetric (like the exact Hessian), and the difference between successive approximations B_k and B_{k+1} is constrained to have low rank. One of the most popular formulae for updating the Hessian approximation B_k is the BFGS formula, named after its inventors, Broyden, Fletcher, Goldfarb, and Shanno, which is defined by

    B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}.

A less well known formula, particularly in the machine learning community, is the symmetric-rank-one (SR1) formula, defined by

    B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^T}{(y_k - B_k s_k)^T s_k}.

The former update is a rank-two update, while the latter is a rank-one update. Both updates satisfy the secant equation and maintain symmetry. The BFGS update always generates positive definite approximations whenever the initial approximation B_0 is positive definite and s_k^T y_k > 0. Often, in practical implementations of quasi-Newton methods, the inverse Hessian approximation H_k is used instead of B_k, and the corresponding update formulae can be generated using the Sherman-Morrison-Woodbury matrix identity (Hager, 1989).

Limited-memory quasi-Newton methods are useful for solving large problems where the computation of Hessian matrices is costly or where these matrices are dense. Instead of storing fully dense n x n approximations, these methods save only a few vectors of length n that capture the approximations. Despite these modest storage requirements, they often converge well. The most popular limited-memory quasi-Newton method is L-BFGS, which uses curvature information from only the most recent iterations to construct the inverse Hessian approximation. Curvature information from earlier iterations, which is less likely to be useful for modeling the actual behavior of the Hessian at the current iteration, is discarded in order to save memory.

Limited-memory quasi-Newton approximations can be used with line search or trust region methods. As described in Byrd et al. (1994), efficient limited-memory implementations of several quasi-Newton update formulae, and of their inverses, can be derived.
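Both update formulae are easy to verify numerically; the following short sketch (illustrative only) applies each update once and checks the secant equation B_{k+1} s_k = y_k.

    import numpy as np

    def bfgs_update(B, s, y):
        """Rank-two BFGS update of the Hessian approximation."""
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

    def sr1_update(B, s, y):
        """Rank-one SR1 update of the Hessian approximation."""
        r = y - B @ s
        return B + np.outer(r, r) / (r @ s)

    rng = np.random.default_rng(0)
    B = np.eye(4)
    s, y = rng.standard_normal(4), rng.standard_normal(4)
    for update in (bfgs_update, sr1_update):
        assert np.allclose(update(B, s, y) @ s, y)   # the secant equation holds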
NETWORK ARCHITECTURES AND HYPERPARAMETER SETTINGS"}, {"section_index": "15", "section_name": "MNIST", "section_text": "The layers of the LeNet5 architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case.

- Convolutional Layer - filter size 5 x 5, 20 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Convolutional Layer - filter size 5 x 5, 50 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Fully Connected Layer - 500 hidden units, and a hyperbolic tangent activation function
- Spatial Batch Normalization Layer
- Outer Output Layer - 10 outputs and output standard deviation of 0.1

Additionally, the network was trained with L2 regularization with parameter 0.0001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.003 where needed, and the momentum was set to 0.9 where needed. AdaDelta did not take any parameters."}, {"section_index": "16", "section_name": "CIFAR10", "section_text": "The layers of the architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case.

- Convolutional Layer - filter size 5 x 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Activation Layer - ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Convolutional Layer - filter size 5 x 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Convolutional Layer - filter size 5 x 5, 64 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Fully Connected Layer - 64 hidden units, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Outer Output Layer - 10 outputs and output standard deviation of 0.1

Additionally, the network was trained with L2 regularization with parameter 0.001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.01 where needed, and the momentum was set to 0.9 where needed. AdaDelta did not take any parameters."}, {"section_index": "17", "section_name": "PSEUDOCODE", "section_text": "Algorithm 1 provides the pseudocode for L-SR1 with the trust region method, while Algorithm 2 provides the pseudocode for L-SR1 with line search."}]
S1Y0td9ee
[{"section_index": "0", "section_name": "SHIFT AGGREGATE EXTRACT NETWORKS", "section_text": "Francesco Orsini, Daniele Baracchi and Paolo Frasconi"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bound together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem have been studied in the graph kernel and neural network literature.

Graph kernels decompose input graphs into substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation through structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels is that the vector representations of the input graphs are learnt rather than handcrafted.

We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiple strata of objects. Objects in each stratum are connected by "part-of" relations to the objects in the stratum above.

Learning on social network data can be considerably hard due to its peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed, in social networks it is common to have nodes in the same graph whose degree differs by orders of magnitude.
This poses a significant challenge for the substructure matching approach used by some graph kernels, as the variability in connectivity generates a large number of unique patterns leading to diagonally dominant kernel matrices.

In case we wish to classify graphs, we can use an H-hierarchical decomposition in which the top stratum contains the graph G that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum, which contains the vertices v of G.

Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep, as they can represent the parts of the parts of an object.

Recursive neural networks associate to the vertices of the input graphs vector representations imposing that they have identical dimensions. Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial.

SAEN compensates the limitations of recursive neural networks by adding the following degrees of flexibility:

1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph,
2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per-stratum basis instead of globally.

Another contribution of this paper is the introduction of a domain compression algorithm, which we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular, objects made of the same sets of parts are indistinguishable. In order to obtain a lossless compression of an H-hierarchical decomposition, we store counts of symmetries, adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997), in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost.

We propose a neural network architecture that takes as input an undirected attributed graph G = (V, E, X) where V is the vertex set, E ⊆ V x V is the edge set, and X = {x_v ∈ R^p}_{v∈V} is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example, this happens in some of the social network datasets of § 4.1), we can set x_v to some vertex invariant such as node centrality or betweenness.

Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999). We extend this approach by decomposing graphs into a hierarchy of π-parametrized "part-of" relations. Formally, an H-hierarchical decomposition is a pair ({S_l}_{l=0}^L, {R_{l,π}}_{l=1}^L) where:

- {S_l}_{l=0}^L are disjoint sets of objects S_l called strata, or levels of the hierarchy. The bottom stratum S_0 contains non-decomposable objects (e.g. individual vertices), while the other strata S_l, l = 1, ..., L, contain composite objects o_i ∈ S_l whose parts o_j ∈ S_{l-1} belong to the preceding stratum S_{l-1}.
- {R_{l,π}}_{l=1}^L is a set of l, π-parametrized R_{l,π}-convolution relations. A pair (o_i, o_j) ∈ S_l x S_{l-1} belongs to R_{l,π} iff "o_j is part of o_i with membership type π". For notational convenience, the parts of o_i are denoted as R_{l,π}^{-1}(o_i) = {o_j | (o_i, o_j) ∈ R_{l,π}}.

An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.
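As a concrete illustration of this definition, the sketch below builds the strata and the π-labelled "part-of" relations of the EGNN decomposition used later in § 4.2 (vertices as S_0, ego graphs as S_1, the whole graph as S_2) with networkx; the function name and the triple-based encoding of the relations are illustrative conventions, not part of the formal definition, and the membership types used are explained next.

    import networkx as nx

    def egnn_decomposition(G, R=1):
        """Strata and pi-labelled part-of relations of an EGNN decomposition.
        Relations are returned as sets of (part, whole, membership type) triples."""
        S0 = list(G.nodes())                   # bottom stratum: the vertices of G
        S1, R1 = [], set()                     # ego graphs, and S_0 -> S_1 relations
        for root in G.nodes():
            for r in range(R + 1):
                e = len(S1)
                ego = nx.ego_graph(G, root, radius=r)
                S1.append(ego)
                for v in ego.nodes():          # role of the vertex in the ego graph
                    R1.add((v, e, 'ROOT' if v == root else 'ELEM'))
        # Top stratum: the graph itself; ego graphs are its parts, with
        # membership type equal to their radius r (given the loop order above).
        R2 = {(e, 0, e % (R + 1)) for e in range(len(S1))}
        return S0, S1, R1, R2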
A pair (0;,0;) \u20ac Sy x Sj-1\nbelongs to Ry, iff \u201co; is part of o; with membership type 7\u201d. For notational convenience, the parts\nof o- are denoted ac Ra! (n-) \u2014 fo-\\(n- o-) C Ryd\nThe membership type 7 is used to represent the roles of the parts of an object. For example, we\ncould decompose a graph as a multiset of 7-neighborhood subgraphs ! in which 7 is the radius o:\nthe neighborhoods (see Figure 1 on the left). Another possible use of the 7 membership type is t\u00a2\n\u2018The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G\nconsisting of all vertices whose shortest-path distance from v is at most r.\nIndeed SAEN allows to use vector representations of different sizes for different strata of objects\n(e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computes\nthe vector representation of each object by applying shift, aggregate and extract operations on the\nvector representations of its parts.\nFigure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in \u00a7 4.2).\nOn the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we\ndecompose an ego graph into the set of its vertices. The directed arrows represent \u201cpart of\u201d relations\nlabeled with their membership type 7. The membership type 7 represents the radius 7 = 0, 1 of the\nego graphs (decomposition on the left) and the role (i.e. 7 = ROOT, ELEM) of a vertex in the ego\ngraph (decomposition on the right) respectively.\nWe propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all th\nobjects of all the strata {5;}/., in an H-hierarchical decomposition. SAEN unfolds a neural net\nwork architecture over an -hierarchical decomposition by using the Shift Aggregate Extract (SAE\nschema.\nAccording to the SAE schema the vector representation of each object in the H-hierarchical decom-\nposition is either computed by applying a neural network on the vertex attributes (for the objects in\nbottom stratum) or defined in terms of the vector representations of its parts (for the other objects).\nMore formally, the SAE schema associates a d)-dimensional representation h; \u20ac R\u00ae to each object\n0; \u20ac S of the H-hierarchical decomposition according to the following formula:\nane\n\nwell,\n\nfo(Xv,; 0) if 0; \u20ac So\n> (Zr @ hy); \u00a9; otherwise\nYS\n\nOjER, (0%) Shift\n\nAggregate\n\nExtract\nwhere f;(-;0,), 1 =0,..., LZ are multilayer neural networks with parameters OQ).\nThe recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:\nAn H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it\nreduces to an R-convolution relation for L = 1.\nWe (5,\n\nfo(%\u00bb,; 0)\n\nret 0;\u20acR_1(0;)\n\nAggregate\n\nExtract\n\nif 0; \u20ac So\n\n(Zr @ hy); \u00a9; otherwise\nYS\n\nShift\nWith respect to the base case (first branch of Eq. 
Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version.

The shift and aggregate steps that we have seen so far are identical to those used in kernel design when computing the explicit feature map of a kernel k(x, z) derived from a sum \sum_{\pi \in \Pi} k_\pi(x, z) of base kernels k_\pi(x, z), \pi \in \Pi. In principle, it would indeed be possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata."}, {"section_index": "3", "section_name": "2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION", "section_text": "In this section we propose a technique, called domain compression, which allows us to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.

Two objects a, b in a stratum S_l are collapsible, a ~ b, if they share the same representation (i.e. h_a = h_b) for all the possible values of Θ_l. A compressed stratum S_l^comp is the quotient set S_l/~ of stratum S_l w.r.t. the collapsibility relation ~. We assume that the attributes of the elements in the bottom stratum S_0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability (vectors of real valued attributes could be discretized using clustering techniques; however, we leave discretization in SAEN to future work). While objects in the bottom stratum S_0 are collapsible when their attributes are identical, for all the other strata S_l, l = 1, ..., L, objects are collapsible if they are made of the same sets of parts for all the membership types π.

In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, explained in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1), together with its compressed version on the right."}, {"section_index": "4", "section_name": "2.3.1 DOMAIN COMPRESSION ALGORITHM", "section_text": "In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture.
However, we leave\ndiscretization in SAEN to future works.\nmake sure that vector representations h; of object parts will fall in the same slot if and only if they\nhave the same membership type 7.\ne Aggregate: the shifted representations (z, \u00ae h,) of the parts 0; are then aggregated with a sum.\ne Extract: the aggregated representation is compressed to a dj-dimensional space by a \u00a9)-\nparametrized nonlinear map f,(-,;) : R'\u2122\"4\u20141! + R@ implemented with a multilayer neural\nnetwork.\nm <n distinct rows it can be decomposed as the product DM\u00b0\u00b0\"? where M\u00b0\u00b0\u2122? is a compressed\nversion of M in which the distinct rows of M appear exactly once. The Boolean decompression\nmatrix, D, encodes the collapsibility relation among the rows of M so that D;; = 1 iff the i!\u201d row\nof M falls in the equivalence class j of ~. A pseudo-inverse C' of D can be computed by dividing\nthe rows of D' by their sum (where D' is the transpose of D).\nExample 1 [f we look at matrix M in Eq. 2 we notice that row 1 and 4 share the encoding |0,0, 0],\nrows 3 and 5 share the encoding |\\,1,0| while the encoding (1,0, 1] appears only once at row 2.\nMatrix M\u00b0\u00b0\u2122? is the compressed version of M.\n0\n\n0\n\nFORGO\n\ncocro\n\nAycomn \u2014\n\n0\n1\n1\n\n0\n0\n1\n\n0,\n\n0\n1) D=\n\ncoooro\n\nHORCSO\n\nC=\n\nVa\n0\n\n0\n\n0\n1\n0\n\n0\n0\n\n1/2\n\n12\n0\n0\n\n0-\n0\nYa,\nMatrix M can be expressed as the matrix product between the decompression matrix D and the\ncompressed version of M\u00b0\u00b0\u2122? (i.e. M = DM\u00b0\u2122?), while the matrix multiplication between the\ncompression matrix C and the M leads to the compressed matrix M\u00b0\u00b0\u2122\u00ae? (i.e Me\u2122? = CM).\nTo apply domain compression we rewrite Eq. | in matrix form as follow\nfo(X; Oo)\nSS\n\n|So|xdo\ne H, \u20ac R'*|*4 is the matrix that represents the d)-dimensional encodings of the objects in 5}.\nThe rows of H; are the vector representations h,; in Eq. 1, while the rows of Hj_, are the vector\nrepresentations h,; in Eq. 1;\ne X \u20ac R!%0I*? is the matrix that represents the p-dimensional encodings of the vertex attributes it\nV (i.e. the rows of X are the x,, of Eq. 1);\n\ne f,(-;Q;) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;\n\ne Ry \u20ac RIS*!St-11 Vr \u00a9 Tl, are the matrix representations of the 72;,,,-convolution relations o\nEq. 1 whose elements are (R).,)i; = 1 if (0;,0;) \u20ac Ry and 0 otherwise.\nDomain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algo-\nrithm 3) that takes as input the attribute matrix X and the part-of matrices R;,, and returns thei\ncompressed versions X\u00b0?\u2122? and the R;\u00b0\"\"\u201d respectively. The algorithm starts by invoking (line 1)\nthe procedure COMPUTE-CD on X to obtain the compression and decompression matrices Co anc\nDo respectively. The compression matrix Co is used to compress X (line 2) then we start iterating\nover the levels 1 = 0,..., L of the H-hierarchical decomposition (line 4) and compress the R;,,\nmatrices. The compression of the R;,, matrices is done by right-multiplying them by the decom.\npression matrix D)_, of the previous level / \u2014 1 (line 5). In this way we collapse the parts of relatior\nRix (ie. the columns of R;,,) as these were identified in stratum S;_1 as identical objects (i.e\nthose objects corresponding to the rows of X or Rj-1,, collapsed during the previous step). The\nresult is a list ROO\"? 
We proceed by collapsing equivalent objects in stratum S_l, i.e. those made of identical sets of parts: we find symmetries in R^{col-comp} by invoking COMPUTE-CD (line 6) and obtain a new pair C_l, D_l of compression and decompression matrices respectively. Finally, the compression matrix C_l is applied to the column-compressed matrices in R^{col-comp} in order to obtain the Π_l compressed matrices of stratum S_l (line 8).

Algorithm 3: DOMAIN-COMPRESSION(X, R)
  1: C_0, D_0 = COMPUTE-CD(X)
  2: X^comp = C_0 X                                        // Compress the X matrix.
  3: R^comp = {}                      // Initialize an empty container for compressed matrices.
  4: for l = 1 to L
  5:   R^col-comp = [R_{l,π} D_{l-1}, ∀π = 1, ..., |Π_l|]  // column compression
  6:   C_l, D_l = COMPUTE-CD(R^col-comp)
  7:   for π = 1 to |Π_l|
  8:     R^comp_{l,π} = C_l R^col-comp_π                   // row compression
  9: return X^comp, R^comp

Algorithm 3 allows us to compute the domain-compressed version of Eq. 3, which can be obtained by replacing: X with X^comp = C_0 X, R_{l,π} with R^comp_{l,π} = C_l R_{l,π} D_{l-1}, and H_l with H_l^comp. Should we wish to recover the original encodings H_l, we just need to apply the decompression matrix D_l to the compressed encodings H_l^comp; indeed, H_l = D_l H_l^comp.

As we can see by substituting S_l with S_l^comp, the greater the number of symmetries (i.e. when |S_l^comp| << |S_l|), the greater the domain compression will be.
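A minimal numpy sketch of the COMPUTE-CD step follows; the function name compute_cd mirrors the procedure above, and the construction of C and D follows the pseudo-inverse definition given earlier (note that np.unique sorts the compressed rows, which is immaterial since D is built accordingly).

    import numpy as np

    def compute_cd(M):
        """Find the distinct rows of M and return the compression matrix C
        and the Boolean decompression matrix D such that M = D @ (C @ M)."""
        Mcomp, inverse = np.unique(M, axis=0, return_inverse=True)
        n, m = M.shape[0], Mcomp.shape[0]
        D = np.zeros((n, m))
        D[np.arange(n), inverse] = 1.0             # D_ij = 1 iff row i collapses to j
        C = D.T / D.sum(axis=0, keepdims=True).T   # rows of D^T divided by their sums
        return C, D

    # Example 1 from the text: rows 1 and 4, and rows 3 and 5, collapse.
    M = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]])
    C, D = compute_cd(M)
    assert np.allclose(D @ (C @ M), M)             # lossless compression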
The deep upgrade of existing graph kernels is performed by reweighing the\ncounts of the substructures by the square root of their word-vector self similarity.\nAnother recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs fot\nimages to graphs. While the receptive field of a CNN is usually a square window (Niepert et al.\n2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific\ntemporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on the\nnodes of the subgraphs/receptive fields.\nWhen learning with graph inputs two fundamental design aspects that must be taken into account are:\nthe choice of the pattern generator and the choice of the matching operator. The former decomposes\nthe graph input in substructures while the latter allows to compare the substructures.\nIn order to answer the experimental questions we tested our method on six publicly available dataset\nfirst proposed by Yanardag & Vishwanathan (2015).\ne@ COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task\nto determine the field of study of the researcher between High Energy Physics, Condensed Matte\nPhysics and Astro Physics.\n\ne IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB where in each graph the ve\ntices represent actors/actresses and the edges connect people which have performed in the sam\nmovie. Collaboration graphs are generated from movies belonging to genres Action and Romanc\nfor IMDB-BINARYand Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress i\nthose genres an ego-graph is extracted. The task is to identify the genre from which the ego-grap\nhas been generated.\n\ne REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is de\nrived from a discussion thread from Reddit. In those datasets each vertex represent a distinct ust\nand two users are connected by an edge if one of them has responded to a post of the other i\nthat discussion. The task in REDDIT-BINARYis to discriminate between threads originating frot\na discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-base\nsubreddit (/AmA, AskReddit). The task in REDDIT-MULTISKand REDDIT-MULTI!12Kis a mult\nclass classification problem where each graph is labeled with the subreddit where it has originate\n(worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTISKand AskReddi\nAdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearne.\nqonrldnewe Trall\u00a5Chrampcomeec far RENNIT_MITT TI19K)"}, {"section_index": "5", "section_name": "4.2 EXPERIMENTS", "section_text": "In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Networl\n(EGNN), that mimics the graph kernel NSPDK with the distance parameter set to 0.\nBefore applying EGNN we turn unattributed graphs (V, \u00a3) into attributed graphs (V, E, X) by an-\nnotating their vertices v \u20ac V with attributes x, \u20ac X. We label vertices v of G with their degree and\nencode this information into the attributes x, by employing the 1-hot encoding.\nE1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each\ndataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for eact\nstratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.)\nactivation function on all the units. 
We report the chosen parameters in Table Al of the appendix\nIn all our experiments we trained the neural networks by using the Adam algorithm to minimize <\ncross entropy loss.\nThe classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We man\nually chose the number of layers and units for each level of the part-of decomposition; the numbe\nof epochs was chosen manually for each dataset and we kept the same value for all the 100 runs o\nthe 10-times 10-fold cross-validation.\nEGNN decomposes attributed graphs G = (V, \u00a3, X) into a 3 level H-hierarchical decomposition\nwith the following strata (see Ficure 1 for a pictorial representation of EGNN):\nwith the following strata (see Figure | for a pictorial representation of EGNN):\n\ne stratum So contains objects o, that are in one-to-one correspondence with the vertices v \u20ac V.\n\ne stratum Sj contains v;.\u00a2-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (Voor, Ve, Ee)\nof radius r = 0,1,..., R and has part-of alphabet II, = {ROOT,ELEM}. Objects 0, \u20ac So are\n\u201cELEM-part-of\u201d ego graph e if v \u20ac Vz \\ {vroot}, while the are \u201cROOT-part-of\u201d ego graph e if\nU = Vroot+\n\ne stratum $5 contains the graph G that we want to classify and has part-of alphabet II, = {0,1}\nwhich correspond to the radius of the ego graphs e \u20ac S} of which G is made of.\nFigure 4: Comparison of accuracy results.\n\nDATASET DGK PSCN SAEN\n(Yanardag et al. 2015) | (Niepert et al., 2016) | (our method)\nCOLLAB 73.09 + 0.25 72.60 + 2.16 75.63 +0.31\nIMDB-BINARY 66.96 + 0.56 71.00 + 2.29 71.26 + 0.74\nIMDB-MULTI 44.55 + 0.52 45.23 + 2.84 49.11 +0.64\nREDDIT-BINARY 78.04 + 0.39 86.30 + 1.58 86.08 + 0.53\nREDDIT-MULTISK 41.27 + 0.18 49.10 + 0.70 52.24 + 0.38\nREDDIT-MULTI12K 32.22 + 0.10 41.32 + 0.42 46.72 + 0.23\nThe mean accuracies and their standard deviations obtained by our method are reported in Ta-\nble 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015)\nand by Niepert et al. (2016).\nAlthough our method was conceived for social network data, it can also handle other types of graphs\nFor the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on th\nmolecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).\nTable 1: Comparison of sizes and runtimes of the datasets before and after the compression\n\na ne n\nDATASET SIZE (MB) RUNTIME\n\nORIGINAL | COMP. | RATIO | ORIGINAL | COMP. | SPEEDUP\nCOLLAB 1190 448 0.38 43\u2019 18\u201d 8\u00b0 20\u201d 5.2\nIMDB-BINARY 68 34 0.50 3\u00b0 9\u201d 0\u00b0 30\u201d 6.3\nIMDB-MULTI 74 40 0.54 T 41\u201d 1 54\u201d 4.0\nREDDIT-BINARY 326 56 0.17 TO 2\u2019 35\u201d > 100.0\nREDDIT-MULTISK 952 162 0.17 OOM 9\u00b0 51\u201d -\nREDDIT-MULTI12K 1788 347 0.19 OOM 29\u00b0 55\u201d _\nE2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression\ntogether with the data compression ratio. > We also estimate the benefit of the relational compression\nfrom a computational time point of view and report the measurement of the runtime for 1 run with\nand without compression together with the speedup factor.\nFor the purpose of this experiment, all tests were run on a computer with two 8-cores Intel Xeor\nES-2665 processors and 94 GB RAM. 
Uncompressed datasets which exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those which exceeded a time limit of 100 times the time needed for the compressed version are marked as "TO" (timeout)."}, {"section_index": "6", "section_name": "4.3 DISCUSSION", "section_text": "A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problems. The results for the molecule and protein datasets (see Table 5) are also in line with the current state of the art.

A2 The compression algorithm has proven to be effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.

We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN for graph classification on 6 real world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "P Baldi and G Pollastri. The principled design of large-scale recursive neural network architectures - DAG-RNNs and the protein structure prediction problem. Journal of Machine Learning Research, 4(Sep):575-602, 2003.

D Haussler. Convolution kernels on discrete structures. Technical report, Citeseer, 1999.

H Kashima, K Tsuda, and A Inokuchi. Marginalized kernels between labeled graphs. In ICML-03, volume 3, pp. 321-328, 2003.

A Vullo and P Frasconi. Disulfide connectivity prediction using recursive neural networks and evolutionary information. Bioinformatics, 20(5):653-659, 2004.

P Yanardag and SVN Vishwanathan. Deep graph kernels. In Proc. of KDD-15, pp. 1365-1374, 2015.
"}, {"section_index": "8", "section_name": "APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS", "section_text": "In Table A1 we report, for each dataset, the radiuses r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum.

Table A1: Parameters for the neural networks used in the experiments.

    DATASET            RADIUSES r    HIDDEN UNITS (S_0 / S_1 / S_2)
    COLLAB             0,            5-5 / 5-2 / 5-3
    IMDB-BINARY        0, 1, 2       2 / 5-2 / 5-3-1
    IMDB-MULTI         0, 1, 2       2 / 5-2 / 5-3
    REDDIT-BINARY      0,            0-5 / 5-2 / 5-3-1
    REDDIT-MULTI5K     0,            0 / 10 / 6-5
    REDDIT-MULTI12K    0,            0 / 10 / 20-11
    MUTAG              0, 1, 2, 3    0 / 5-5 / 5-5-1
    PTC                0,            5 / 15 / 15-1
    NCI1               0, 1, 2, 3    5 / 15 / 15-10-1
    PROTEINS           0, 1, 2, 3    3-2 / 6-5-4 / 6-3-1
    D&D                0, 1, 2, 3    0 / 5-2 / 5-3-1

(Some radius entries were truncated in extraction and are reproduced as they appear in the source.)"}]
Sy2fzU9gl
[{"section_index": "0", "section_name": "β-VAE: LEARNING BASIC VISUAL CONCEPTS WITH A CONSTRAINED VARIATIONAL FRAMEWORK", "section_text": "Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner
Google DeepMind
{irinah, lmatthey, arkap, cpburgess, glorotx, botvinick, shakir, lerchner}@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning of a disentangled posterior distribution over the underlying generative factors of sensory data is a major challenge in AI research (Bengio et al., 2013; Lake et al., 2016). Most previous attempts required a priori knowledge of the number and/or nature of the data generative factors (Hinton et al., 2011; Rippel & Adams, 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Goroshin et al., 2015; Kulkarni et al., 2015; Cheung et al., 2015; Whitney et al., 2016; Karaletsos et al., 2016). This is not always feasible in the real world, where the newly initialised learner may be exposed to complex data where no a priori knowledge of the generative factors exists and little to no supervision for discovering the factors is available. Until recently, purely unsupervised approaches to disentangled factor learning have not scaled well (Schmidhuber, 1992; Desjardins et al., 2012; Tang et al., 2013; Cohen & Welling, 2014; 2015)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.

The difficulty of learning a task for a given machine learning approach can vary significantly depending on the choice of the data representation. Having a representation that is well suited to the particular task and data domain can significantly improve the learning success and robustness of the chosen model (Bengio et al., 2013). It has been suggested that learning a disentangled representation of the generative factors in the data can be useful for a large variety of tasks and domains (Bengio et al., 2013; Ridgeway, 2016).
A disentangled representation can be defined as one where single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors (Bengio et al., 2013). For example, a model trained on a dataset of 3D objects might learn independent latent units sensitive to single independent data generative factors, such as object identity, position, scale, lighting or colour, thus acting as an inverse graphics model (Kulkarni et al., 2015). In a disentangled representation, knowledge about one factor can generalise to novel configurations of other factors. According to Lake et al. (2016), disentangled representations could boost the performance of state-of-the-art AI approaches in situations where they still struggle but where humans excel. Such scenarios include those which require knowledge transfer, where faster learning is achieved by reusing learnt representations for numerous tasks; zero-shot inference, where reasoning about new data is enabled by recombining previously learnt factors; or novelty detection.

Figure 1: Manipulating latent variables on celebA ((a) azimuth (rotation), (b) emotion (smile), (c) hair (fringe)): Qualitative results comparing disentangling performance of β-VAE (β = 250), VAE (Kingma & Welling, 2014) (β = 1) and InfoGAN (Chen et al., 2016). In all figures of latent code traversal, each block corresponds to the traversal of a single latent variable while keeping others fixed to either their inferred (β-VAE, VAE and DC-IGN where applicable) or sampled (InfoGAN) values. Each row represents a different seed image used to infer the latent values in the VAE-based models, or a random sample of the noise variables in InfoGAN. β-VAE and VAE traversal is over the [-3, 3] range. InfoGAN traversal is over ten dimensional categorical latent variables. Only β-VAE and InfoGAN learnt to disentangle factors like azimuth (a), emotion (b) and hair style (c), whereas VAE learnt an entangled representation (e.g. azimuth is entangled with emotion, presence of glasses and gender). InfoGAN images adapted from Chen et al. (2016). Reprinted with permission.

Recently, a scalable unsupervised approach for disentangled factor learning has been developed, called InfoGAN (Chen et al., 2016). InfoGAN extends the generative adversarial network (GAN) (Goodfellow et al., 2014) framework to additionally maximise the mutual information between a subset of the generating noise variables and the output of a recognition network. It has been reported to be capable of discovering at least a subset of data generative factors and of learning a disentangled representation of these factors. The reliance of InfoGAN on the GAN framework, however, comes at the cost of training instability and reduced sample diversity. Furthermore, InfoGAN requires some a priori knowledge of the data, since its performance is sensitive to the choice of the prior distribution and the number of the regularised noise variables. InfoGAN also lacks a principled inference network (although the recognition network can be used as one). The ability to infer the posterior latent distribution from sensory input is important when using the unsupervised model in transfer learning or zero-shot inference scenarios.
Hence, while InfoGAN is an important step in the right direction, we believe that further improvements are necessary to achieve a principled way of using unsupervised learning for developing more human-like learning and reasoning in algorithms, as described by Lake et al. (2016).
Finally, there is currently no general method for quantifying the degree of learnt disentanglement. Therefore there is no way to quantitatively compare the degree of disentanglement achieved by different models or when optimising the hyperparameters of a single model.
Figure 2: Manipulating latent variables on 3D chairs: Qualitative results comparing disentangling performance of β-VAE (β = 5), VAE (Kingma & Welling, 2014) (β = 1), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015); panels show (a) azimuth, (b) width and (c) leg style. InfoGAN traversal is over the [-1, 1] range. VAE always learns an entangled representation (e.g. chair width is entangled with azimuth and leg style (b)). All models apart from VAE learnt to disentangle the labelled data generative factor, azimuth (a). InfoGAN and β-VAE were also able to discover unlabelled factors in the dataset, such as chair width (b). Only β-VAE, however, learnt about the unlabelled factor of chair leg style (c). InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission.
We propose augmenting the original VAE framework with a single hyperparameter β that modulates the learning constraints applied to the model. These constraints impose a limit on the capacity of the latent information channel and control the emphasis on learning statistically independent latent factors. β-VAE with β = 1 corresponds to the original VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). With β > 1 the model is pushed to learn a more efficient latent representation of the data, which is disentangled if the data contains at least some underlying factors of variation that are independent. We show that this simple modification allows β-VAE to significantly improve the degree of disentanglement in learnt latent representations compared to the unmodified VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). Furthermore, we show that β-VAE achieves state of the art disentangling performance against both the best unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches for disentangled factor learning on a number of benchmark datasets, such as CelebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009), using qualitative evaluation. 
Finally, to help quantify the differences, we develop a new measure of disentanglement and show that β-VAE significantly outperforms all our baselines on this measure (ICA, PCA, VAE (Kingma & Welling, 2014), DC-IGN and InfoGAN (Chen et al., 2016)).
Our main contributions are the following: 1) we propose β-VAE, a new unsupervised approach for learning disentangled representations of independent visual data generative factors; 2) we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models; 3) we demonstrate both qualitatively and quantitatively that our β-VAE approach achieves state-of-the-art disentanglement performance compared to various baselines on a variety of complex datasets.
In this paper we attempt to address these issues. We propose β-VAE, a deep unsupervised generative approach for disentangled factor learning that can automatically discover the independent latent factors of variation in unsupervised data. Our approach is based on the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014), which brings scalability and training stability. While the original VAE work has been shown to achieve limited disentangling performance on simple datasets, such as FreyFaces or MNIST (Kingma & Welling, 2014), disentangling performance does not scale to more complex datasets (e.g. Aubry et al., 2014; Paysan et al., 2009; Liu et al., 2015), prompting the development of more elaborate semi-supervised VAE-based approaches for learning disentangled factors (e.g. Kulkarni et al., 2015; Karaletsos et al., 2016).
Figure 3: Manipulating latent variables on 3D faces: Qualitative results comparing disentangling performance of β-VAE (β = 20), VAE (Kingma & Welling, 2014) (β = 1), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015); panels show (a) azimuth (rotation), (b) lighting and (c) elevation. InfoGAN traversal is over the [-1, 1] range. All models learnt to disentangle lighting (b) and elevation (c). DC-IGN and VAE struggled to continuously interpolate between different azimuth angles (a), unlike β-VAE, which additionally learnt to encode a wider range of azimuth angles than other models. InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission.
Figure 4: Latent factors learnt by β-VAE on celebA: traversal of individual latents demonstrates that β-VAE discovered in an unsupervised manner factors that encode (a) skin colour, (b) transition from an elderly male to younger female (age/gender), and (c) image saturation.
Let D = {X, V, W} be the set that consists of images x ∈ R^N and two sets of ground truth data generative factors: conditionally independent factors v ∈ R^K, where log p(v|x) = Σ_k log p(v_k|x); and conditionally dependent factors w ∈ R^H. 
We assume that the images x are generated by the true world simulator using the corresponding ground truth data generative factors: p(x|v, w) = Sim(v, w).
We want to develop an unsupervised deep generative model that, using samples from X only, can learn the joint distribution of the data x and a set of generative latent factors z (z ∈ R^M, where M ≥ K) such that z can generate the observed data x; that is, p(x|z) ≈ p(x|v, w) = Sim(v, w). Thus a suitable objective is to maximise the marginal (log-)likelihood of the observed data x in expectation over the whole distribution of latent factors z:

$\max_\theta \; \mathbb{E}_{p_\theta(z)}\left[ p_\theta(x|z) \right] \qquad (1)$

For a given observation x, we describe the inferred posterior configurations of the latent factors z by a probability distribution q_φ(z|x). Our aim is to ensure that the inferred latent factors q_φ(z|x) capture the generative factors v in a disentangled manner. The conditionally dependent data generative factors w can remain entangled in a separate subset of z that is not used for representing v. In order to encourage this disentangling property in the inferred q_φ(z|x), we introduce a constraint over it by trying to match it to a prior p(z) that can both control the capacity of the latent information bottleneck and embody the desiderata of statistical independence mentioned above. This can be achieved if we set the prior to be an isotropic unit Gaussian (p(z) = N(0, I)), hence arriving at the constrained optimisation problem in Eq. 2, where ε specifies the strength of the applied constraint:

$\max_{\phi, \theta} \; \mathbb{E}_{x \sim D}\left[ \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] \right] \quad \text{subject to} \quad D_{KL}(q_\phi(z|x) \,\|\, p(z)) < \varepsilon \qquad (2)$

Re-writing Eq. 2 as a Lagrangian under the KKT conditions (Karush, 1939; Kuhn & Tucker, 1951) gives:

$\mathcal{F}(\theta, \phi, \beta; x, z) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta \left( D_{KL}(q_\phi(z|x) \,\|\, p(z)) - \varepsilon \right) \qquad (3)$

where the KKT multiplier β is the regularisation coefficient that constrains the capacity of the latent information channel z and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior p(z). Since β, ε ≥ 0 according to the complementary slackness KKT condition, Eq. 3 can be re-written to arrive at the β-VAE formulation: the familiar variational free energy objective function as described by Jordan et al. (1999), but with the addition of the β coefficient:

$\mathcal{F}(\theta, \phi, \beta; x, z) \geq \mathcal{L}(\theta, \phi; x, z, \beta) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta \, D_{KL}(q_\phi(z|x) \,\|\, p(z)) \qquad (4)$

Varying β changes the degree of applied learning pressure during training, thus encouraging different learnt representations. β-VAE where β = 1 corresponds to the original VAE formulation of Kingma & Welling (2014). We postulate that in order to learn disentangled representations of the conditionally independent data generative factors v, it is important to set β > 1, thus putting a stronger constraint on the latent bottleneck than in the original VAE formulation of Kingma & Welling (2014). These constraints limit the capacity of z, which, combined with the pressure to maximise the log likelihood of the training data x under the model, should encourage the model to learn the most efficient representation of the data. Since the data x is generated using at least some conditionally independent ground truth factors v, and the D_KL term of the β-VAE objective function encourages conditional independence in q_φ(z|x), we hypothesise that higher values of β should encourage learning a disentangled representation of v. The extra pressures coming from high β values, however, may create a trade-off between reconstruction fidelity and the quality of disentanglement within the learnt latent representations. 
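To make Eq. 4 concrete, the following is a minimal sketch of the β-VAE training loss for a diagonal Gaussian posterior q_φ(z|x) = N(μ, σ²) and a Bernoulli decoder. The framework choice (PyTorch) and function names are ours, for illustration only:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Negative reconstruction term of Eq. 4 for a Bernoulli decoder:
    # -E_q[log p(x|z)], with x_recon the decoder's Bernoulli means in [0, 1].
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta = 1 recovers the standard VAE; beta > 1 tightens the latent bottleneck.
    return recon + beta * kl
```

Training then proceeds exactly as for a standard VAE, with β as the single extra hyperparameter.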
Disentangled representations emerge when the right balance is found between information preservation (reconstruction cost as regularisation) and latent channel capacity restriction (β > 1). The latter can lead to poorer reconstructions due to the loss of high frequency details when passing through a constrained latent bottleneck. Hence, the log likelihood of the data under the learnt model is a poor metric for evaluating disentangling in β-VAEs. Instead we propose a quantitative metric that directly measures the degree of learnt disentanglement in the latent representation.
Since our proposed hyperparameter β directly affects the degree of learnt disentanglement, we would like to estimate the optimal β for learning a disentangled latent representation directly. However, it is not possible to do so. This is because the optimal β will depend on the value of ε in Eq. 2. Different datasets and different model architectures will require different optimal values of ε. However, when optimising β in Eq. 4 we are indirectly also optimising ε for the best disentanglement (see Sec. A.7 for details), and while we can not learn the optimal value of β directly, we can instead estimate it using either our proposed disentanglement metric (see Sec. 3) or through visual inspection heuristics.
3 DISENTANGLEMENT METRIC
It is important to be able to quantify the level of disentanglement achieved by different models. Designing a metric for this, however, is not straightforward. We begin by defining the properties that we expect a disentangled representation to have. Then we describe our proposed solution for quantifying the presence of such properties in a learnt representation.
As stated above, we assume that the data is generated by a ground truth simulation process which uses a number of data generative factors, some of which are conditionally independent, and we also assume that they are interpretable. For example, the simulator might sample independent factors corresponding to object shape, colour and size to generate an image of a small green apple. Because of the independence property, the simulator can also generate small red apples or big green apples. A representation of the data that is disentangled with respect to these generative factors, i.e. which encodes them in separate latents, would enable robust classification even using very simple linear classifiers (hence providing interpretability). For example, a classifier that learns a decision boundary that relies on object shape would perform as well when other data generative factors, such as size or colour, are varied.
Note that a representation consisting of independent latents is not necessarily disentangled, according to our desiderata. Independence can readily be achieved by a variety of approaches (such as PCA or ICA) that learn to project the data onto independent bases. Representations learnt by such approaches do not in general align with the data generative factors and hence may lack interpretability. For this reason, a simple cross-correlation calculation between the inferred latents would not suffice as a disentanglement metric.
Our proposed disentangling metric, therefore, measures both the independence and interpretability (due to the use of a simple classifier) of the inferred latents. To apply our metric, we run inference on a number of images that are generated by fixing the value of one data generative factor while randomly sampling all others. If the independence and interpretability properties hold for the inferred representations, there will be less variance in the inferred latents that correspond to the fixed generative factor. We use a low capacity linear classifier to identify this factor and report the accuracy value as the final disentanglement metric score. Smaller variance in the latents corresponding to the target factor will make the job of this classifier easier, resulting in a higher score under the metric. See Fig. 5 for a representation of the full process.
More formally, we start from a dataset D = {X, V, W} as described in Sec. 2, assumed to contain a balanced distribution of ground truth factors (v, w), where image data points are obtained using a ground truth simulator process x ~ Sim(v, w). We also assume we are given labels identifying a subset of the independent data generative factors v ∈ V for at least some instances. 
We then construct a batch of B vectors z_diff^b, to be fed as inputs to a linear classifier, as follows:
1. Choose a factor y ~ Unif[1...K] (e.g. y = scale in Fig. 5).
2. For a batch of L samples:
(a) Sample two sets of latent representations, v_{1,l} and v_{2,l}, enforcing [v_{1,l}]_k = [v_{2,l}]_k if k = y (so that the value of factor k = y is kept fixed).
(b) Simulate image x_{1,l} ~ Sim(v_{1,l}), then infer z_{1,l} = μ(x_{1,l}), using the encoder q(z|x) ~ N(μ(x), σ(x)). Repeat the process for v_{2,l}.
(c) Compute the difference z_diff^l = |z_{1,l} - z_{2,l}|, the absolute linear difference between the inferred latent representations.
3. Use the average z_diff^b = (1/L) Σ_{l=1}^{L} z_diff^l to predict p(y|z_diff^b) (again, y = scale in Fig. 5) and report the accuracy of this predictor as the disentanglement metric score.
The classifier's goal is to predict the index y of the generative factor that was kept fixed for a given z_diff^b. The accuracy of this classifier over multiple batches is used as our disentanglement metric score. We choose a linear classifier with low VC-dimension in order to ensure it has no capacity to perform nonlinear disentangling by itself. We take differences of two inferred latent vectors to reduce the variance in the inputs to the classifier, and to reduce the conditional dependence on the inputs x. This ensures that on average [z_diff]_y < [z_diff]_{k≠y}. See Equations 5 in Appendix A.4 for more details on the process.
Figure 5: Schematic of the proposed disentanglement metric: over a batch of L samples, each pair of images has a fixed value for one target generative factor y (here y = scale) and differs on all others. A linear classifier is then trained to identify the target factor using the average pairwise difference z_diff^b in the latent space over L samples. 
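The scoring procedure above is compact enough to sketch end to end. The sketch below assumes three hypothetical helpers that are not part of the original work: sample_v() draws a ground-truth factor vector, sim(v) renders an image, and encode(x) returns the posterior mean μ(x); the low-capacity linear classifier is instantiated with scikit-learn for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def disentanglement_score(sample_v, sim, encode, n_factors, n_batches=1000, L=64):
    X, y = [], []
    for _ in range(n_batches):
        k = np.random.randint(n_factors)        # step 1: factor to keep fixed
        diffs = []
        for _ in range(L):                      # step 2: batch of L pairs
            v1, v2 = sample_v(), sample_v()
            v2[k] = v1[k]                       # fix factor k across the pair
            z1, z2 = encode(sim(v1)), encode(sim(v2))
            diffs.append(np.abs(z1 - z2))       # step 2(c): |z_1 - z_2|
        X.append(np.mean(diffs, axis=0))        # step 3: average z_diff
        y.append(k)
    half = n_batches // 2
    clf = LogisticRegression(max_iter=1000).fit(X[:half], y[:half])
    return clf.score(X[half:], y[half:])        # accuracy = metric score
```

A representation that dedicates a latent to the fixed factor drives the averaged difference towards zero in exactly that coordinate, which is the signal the linear classifier picks up on.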
In this section we first qualitatively demonstrate that our proposed β-VAE framework consistently discovers more latent factors and disentangles them in a cleaner fashion than either the unmodified VAE (Kingma & Welling, 2014) or state of the art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) solutions for disentangled factor learning on a variety of benchmarks. We then quantify and characterise the differences in disentangled factor learning between our β-VAE framework and a variety of benchmarks using our proposed new disentangling metric."}, {"section_index": "4", "section_name": "4.1 QUALITATIVE BENCHMARKS", "section_text": "We trained β-VAE (see Tbl. 1 for architecture details) on a variety of datasets commonly used to evaluate disentangling performance of models: celebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009). Figures 1-3 provide a qualitative comparison of the disentangling performance of β-VAE, VAE (β = 1) (Kingma & Welling, 2014), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015) as appropriate.
It can be seen that across all datasets β-VAE is able to automatically discover and learn to disentangle all of the factors learnt by the semi-supervised DC-IGN (Kulkarni et al., 2015): azimuth (Fig. 3a, Fig. 2a), lighting and elevation (Fig. 3b,c). Often it acts as a more convincing inverse graphics network than DC-IGN (e.g. Fig. 3a) or InfoGAN (e.g. Fig. 2a, Fig. 1a-c or Fig. 3a). Furthermore, unlike DC-IGN, β-VAE requires no supervision and hence can learn about extra unlabelled data generative factors that DC-IGN can not learn by design, such as chair width or leg style (Fig. 2b,c).
The unsupervised InfoGAN approach shares this quality with β-VAE, and the two frameworks tend to discover overlapping, but not necessarily identical sets of data generative factors. For example, both β-VAE and InfoGAN (but not DC-IGN) learn about the width of chairs (Fig. 2b). Only β-VAE, however, learns about the chair leg style (Fig. 2c). It is interesting to note how β-VAE is able to generate an armchair with a round office chair base, even though such armchairs do not exist in the dataset (or, perhaps, reality). Furthermore, only β-VAE is able to discover all three factors of variation (chair azimuth, width and leg style) within a single model, while InfoGAN learns to allocate its continuous latent variable to either azimuth or width. InfoGAN sometimes discovers factors that β-VAE does not precisely disentangle, such as the presence of sunglasses in celebA. β-VAE does, however, discover numerous extra factors such as skin colour, image saturation, and age/gender that are not reported in the InfoGAN paper (Chen et al., 2016) (Fig. 4). Furthermore, β-VAE latents tend to learn a smooth continuous transformation over a wider range of factor values than InfoGAN (e.g. rotation over a wider range of angles, as shown in Figs. 1-3a).
Overall β-VAE tends to consistently and robustly discover more latent factors and learn cleaner disentangled representations of them than either InfoGAN or DC-IGN. This holds even on such challenging datasets as celebA. Furthermore, unlike InfoGAN and DC-IGN, β-VAE requires no design decisions or assumptions about the data, and is very stable to train.
When compared to the unmodified VAE baseline (β = 1), β-VAE consistently learns significantly more disentangled latent representations. For example, when learning about chairs, VAE entangles chair width with leg style (Fig. 2b). When learning about celebA, VAE entangles azimuth with emotion and gender (Fig. 1a); emotion with hair style, skin colour and identity (Fig. 1b); while the VAE fringe latent also codes for baldness and head size (Fig. 1c). Although VAE performs relatively well on the faces dataset, it still struggles to learn a clean representation of azimuth (Fig. 3a). This, however, suggests that a continuum of disentanglement quality exists, and it can be traversed by varying β within the β-VAE framework. 
While increasing β often leads to better disentanglement, it may come at the cost of blurrier reconstructions and losing representations for some factors, particularly those that correspond to only minor changes in pixel space."}, {"section_index": "5", "section_name": "4.2 QUANTITATIVE BENCHMARKS", "section_text": "In order to quantitatively compare the disentangling performance of β-VAE against various baselines, we created a synthetic dataset of 737,280 binary 2D shapes (heart, oval and square) generated from the Cartesian product of the shape and four independent generative factors v_k defined in vector graphics: position X (32 values), position Y (32 values), scale (6 values) and rotation (40 values over the 2π range). To ensure smooth affine object transforms, each two subsequent values for each factor v_k were chosen to ensure minimal differences in pixel space given 64x64 pixel image resolution.
This dataset was chosen because it contains no confounding factors apart from its five independent data generative factors (identity, position X, position Y, scale and rotation). This gives us knowledge of the ground truth for comparing the disentangling performance of different models in an objective manner. 
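As a check on the bookkeeping, the quoted dataset size follows directly from the factor grid, and the full set of factor combinations can be enumerated in a few lines (our own sketch):

```python
import itertools

shapes = ["square", "oval", "heart"]
pos_x = range(32)
pos_y = range(32)
scales = range(6)
rotations = range(40)   # 40 steps over the 2*pi range

grid = list(itertools.product(shapes, pos_x, pos_y, scales, rotations))
print(len(grid))        # 3 * 32 * 32 * 6 * 40 = 737,280 images
```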
We used our proposed disentanglement metric (see Sec. 3) to quantitatively compare the ability of β-VAE to automatically discover and learn a disentangled representation of the data generative factors of the synthetic dataset of 2D shapes described above with that of a number of benchmarks (see Tbl. 1 in the Appendix for model architecture details). The table in Fig. 6 (left) reports the classification accuracy of the disentanglement metric for 5,000 test samples. It can be seen that β-VAE (β = 4) significantly outperforms all baselines, such as an untrained VAE and the original VAE formulation of Kingma & Welling (2014) (β = 1) with the same architecture as β-VAE, the top ten PCA or ICA components of the data (see Sec. A.3 for details), or when using the raw pixels directly. β-VAE also does better than InfoGAN. Remarkably, β-VAE performs on the same level as DC-IGN despite the latter being semi-supervised and the former wholly unsupervised. Furthermore, β-VAE achieved similar classification accuracy as the ground truth vectors used for data generation, thus suggesting that it was able to learn a very good disentangled representation of the data generative factors.
Figure 6: Disentanglement metric classification accuracy for the 2D shapes dataset. Left: Accuracy for different models and training regimes (reproduced in the table below). Right: Positive correlation is present between the size of z and the optimal normalised values of β for disentangled factor learning for a fixed β-VAE architecture. β values are normalised by latent z size M and input x size N. Note that β values are not uniformly sampled. Orange approximately corresponds to unnormalised β = 1. Good reconstructions are associated with entangled representations (lower disentanglement scores). Disentangled representations (high disentanglement scores) often result in blurry reconstructions.
Model | Disentanglement metric score
Ground truth | 100%
Raw pixels | 45.75 ± 0.8%
PCA | 84.9 ± 0.4%
ICA | 42.03 ± 10.6%
DC-IGN | 99.3 ± 0.1%
InfoGAN | 73.5 ± 0.9%
VAE untrained | 44.14 ± 2.5%
VAE | 61.58 ± 0.5%
β-VAE | 99.23 ± 0.1%
We also examined qualitatively the representations learnt by β-VAE, VAE, InfoGAN and DC-IGN on the synthetic dataset of 2D shapes. Fig. 7A demonstrates that after training, β-VAE with β = 4 learnt a good (while not perfect) disentangled representation of the data generative factors, and its decoder learnt to act as a rendering engine. Its performance was comparable to that of DC-IGN (Fig. 7C), with the difference that DC-IGN required a priori knowledge about the quantity of the data generative factors, while β-VAE was able to discover them in an unsupervised manner. The most informative latent units z_m of β-VAE have the highest KL divergence from the unit Gaussian prior (p(z) = N(0, I)), while the uninformative latents have KL divergence close to zero. Fig. 7A (top three rows) demonstrates the selectivity of each latent z_m to the independent data generating factors: z_m^μ = f(v_k) ∀ v_k ∈ {v_positionX, v_positionY, v_scale, v_rotation}, where z_m^μ is the learnt Gaussian mean of latent unit z_m.
The effect of traversing each latent z_m on the resulting reconstructions is shown in the bottom five rows of Fig. 7A. The latents z6 and z2 learnt to encode X and Y coordinates of the objects respectively; unit z1 learnt to encode scale; and units z5 and z7 learnt to encode rotation. The frequency of oscillations in each rotational latent corresponds to the rotational symmetry of the corresponding object (2π for heart, π for oval and π/2 for square). Furthermore, the two rotational latents seem to encode cos and sin rotational coordinates, while the positional latents align with the Cartesian axes. While such alignment with intuitive factors for humans is not guaranteed, empirically we found it to be very common. Fig. 7B demonstrates that the unmodified VAE baseline (β = 1) is not able to disentangle generative factors in the data as well as β-VAE with appropriate learning pressures. Instead each latent z (apart from z6, which learnt rotation) encodes at least two data generative factors. InfoGAN also achieved a degree of disentangling (see Fig. 7D), particularly for positional factors. However, despite our best efforts to train InfoGAN, we were not able to achieve the same degree of disentangling in other factors, such as rotation, scale and shape. We also found its ability to generate the different shapes in the dataset to be inaccurate and unstable during training, possibly due to reported limitations of the GAN framework, which can struggle to learn the full data distribution and instead will often learn a small subset of its modes (Salimans et al., 2016; Zhao et al., 2016).
Understanding the effects of β: We hypothesised that constrained optimisation is important for enabling deep unsupervised models to learn disentangled representations of the independent data generative factors (Sec. 2). In the β-VAE framework this corresponds to tuning the β coefficient. One way to view β is as a mixing coefficient (see Sec. A.6 for a derivation) for balancing the magnitudes of gradients from the reconstruction and the prior-matching components of the VAE lower bound formulation in Eq. 4 during training. In this context it makes sense to normalise β by latent z size M and input x size N in order to compare its different values across different latent layer sizes and different datasets (β_norm = βM/N). 
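For reference, this normalisation is a one-liner (our own illustration):

```python
def beta_norm(beta, latent_size_m, input_size_n):
    # beta_norm = beta * M / N makes beta comparable across
    # different latent sizes M and input sizes N.
    return beta * latent_size_m / input_size_n

# For the 2D shapes setup (M = 10 latents, N = 64 * 64 pixels),
# the well-performing beta = 4 corresponds to beta_norm ~= 0.0098.
print(beta_norm(4, 10, 64 * 64))
```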
We found that larger latent z layer sizes M require higher constraint pressures (higher β values); see Fig. 6 (right). Furthermore, the relationship of β for a given M is characterised by an inverted U curve. When β is too low or too high the model learns an entangled latent representation due to either too much or too little capacity in the latent z bottleneck. We find that in general β > 1 is necessary to achieve good disentanglement. However, if β is too high and the resulting capacity of the latent channel is lower than the number of data generative factors, then the learnt representation necessarily has to be entangled (as a low-rank projection of the true data generative factors will compress them in a non-factorial way to still capture the full data distribution well). We also note that VAE reconstruction quality is a poor indicator of learnt disentanglement. Good disentangled representations often lead to blurry reconstructions due to the restricted capacity of the latent information channel z, while entangled representations often result in the sharpest reconstructions. We therefore suggest that one should not necessarily strive for perfect reconstructions when using β-VAEs as unsupervised feature learners, though it is often possible to find the right β-VAE architecture and the right value of β to have both well disentangled latent representations and good reconstructions.
We proposed a principled way of choosing β for datasets with at least weak label information. If label information exists for at least a small subset of the independent data generative factors of variation, one can apply the disentanglement metric described in Sec. 3 to approximate the level of learnt disentanglement for various β choices during a hyperparameter sweep. When such labelled information is not available, the optimal value of β can be found through visual inspection of what effect the traversal of each single latent unit z_m has on the generated images (x|z) in pixel space (as shown in Fig. 7, rows 4-8). For the 2D shapes dataset, we have found that the optimal values of β as determined by visual inspection match closely the optimal values as determined by the disentanglement metric.
Figure 7: A: Representations learnt by a β-VAE (β = 4). Each column represents a latent z_i, ordered according to the learnt Gaussian variance (last row). Row 1 (position) shows the mean activation (red represents high values) of each latent z_i as a function of all 32x32 locations averaged across objects, rotations and scales. Rows 2 and 3 show the mean activation of each unit z_i as a function of scale (respectively rotation), averaged across rotations and positions (respectively scales and positions). Square is red, oval is green and heart is blue. Rows 4-8 (second group) show reconstructions resulting from the traversal of each latent z_i over three standard deviations around the unit Gaussian prior mean while keeping the remaining 9/10 latent units fixed to the values obtained by running inference on an image from the dataset. B: Similar analysis for VAE (β = 1). C: Similar analysis for DC-IGN, clamping a single latent each for scale, positions, orientation and 5 for shape. 
D: Similar analysis for InfoGAN, using 5 continuous latents regularised using the mutual information cost, and 5 additional unconstrained noise latents (not shown).
In this paper we have reformulated the standard VAE framework (Kingma & Welling, 2014; Rezende et al., 2014) as a constrained optimisation problem with strong latent capacity constraint and independence prior pressures. By augmenting the lower bound formulation with the β coefficient that regulates the strength of such pressures and, as a consequence, the qualitative nature of the representations learnt by the model, we have achieved state of the art results for learning disentangled representations of data generative factors. We have shown that our proposed β-VAE framework significantly outperforms, both qualitatively and quantitatively, the original VAE (Kingma & Welling, 2014), as well as state-of-the-art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches to disentangled factor learning. Furthermore, we have shown that β-VAE consistently and robustly discovers more factors of variation in the data, and it learns a representation that covers a wider range of factor values and is disentangled more cleanly than other benchmarks, all in a completely unsupervised manner. Unlike InfoGAN and DC-IGN, our approach does not depend on any a priori knowledge about the number or the nature of data generative factors. Our preliminary investigations suggest that the performance of the β-VAE framework may depend on the sampling density of the data generative factors within a training dataset (see Appendix A.8 for more details). It appears that having more densely sampled data generative factors results in better disentangling performance of β-VAE; however, we leave a more principled investigation of this effect to future work.
β-VAE is robust with respect to different architectures, optimisation parameters and datasets, hence requiring few design decisions. Our approach relies on the optimisation of a single hyperparameter β, which can be found directly through a hyperparameter search if weakly labelled data is available to calculate our new proposed disentangling metric. Alternatively the optimal β can be estimated heuristically in purely unsupervised scenarios. Learning an interpretable factorised representation of the independent data generative factors in a completely unsupervised manner is an important precursor for the development of artificial intelligence that understands the world in the same way that humans do (Lake et al., 2016). We believe that using our approach as an unsupervised pretraining stage for supervised or reinforcement learning will produce significant improvements for scenarios such as transfer or fast learning.
We would like to thank Charles Blundell, Danilo Rezende, Tejas Kulkarni and David Pfau for helpful comments that improved the manuscript."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of CAD models. In CVPR, 2014.
Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv, 2016.
Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, and Bruno A. Olshausen. Discovering hidden factors of variation in deep networks. In Proceedings of the International Conference on Learning Representations, Workshop Track, 2015.
T. Cohen and M. Welling. Transformation properties of learned visual representations. In ICLR, 2015.
Taco Cohen and Max Welling. Learning the irreducible representations of commutative Lie groups. arXiv, 2014.
G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative entangling. arXiv, 2012.
Carl Doersch. Tutorial on variational autoencoders. arXiv, 2016.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. NIPS, pp. 2672-2680, 2014.
Ross Goroshin, Michael Mathieu, and Yann LeCun. Learning to linearize under uncertainty. NIPS, 2015.
G. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. International Conference on Artificial Neural Networks, 2011.
Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
Theofanis Karaletsos, Serge Belongie, and Gunnar Rätsch. Bayesian representation learning with oracle constraints. ICLR, 2016.
W. Karush. Minima of Functions of Several Variables with Inequalities as Side Constraints. Master's thesis, Univ. of Chicago, Chicago, Illinois, 1939.
D. P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proceedings of 2nd Berkeley Symposium, pp. 481-492, 1951.
Tejas Kulkarni, William Whitney, Pushmeet Kohli, and Joshua Tenenbaum. Deep convolutional inverse graphics network. NIPS, 2015.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv, 2016.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. ICCV, 2015.
P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. AVSS, 2009.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, and David Cournapeau. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 2015.
Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. ICML, 2014.
Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv, 2014.
Karl Ridgeway. A survey of inductive biases for factorial representation-learning. arXiv, 2016. URL http://arxiv.org/abs/1612.05299.
Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv, 2013.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv, 2016. URL http://arxiv.org/abs/1606.03498.
Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-869, 1992.
Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. arXiv, 2016.
Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, USA, 2013.
William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. arXiv, 2016. URL http://arxiv.org/pdf/1602.06822.pdf.
Jimei Yang, Scott Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. NIPS, 2015.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv, 2016. URL http://arxiv.org/abs/1609.03126.
Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In Advances in Neural Information Processing Systems 27, 2014.
A summary of all model architectures used in this paper can be seen in Tbl. 1."}, {"section_index": "7", "section_name": "A.2. INFOGAN TRAINING", "section_text": "To train the InfoGAN network described in Tbl. 1 on the 2D shapes dataset (Fig. 7), we followed the training paradigm described in Chen et al. (2016) with the following modifications. For the mutual information regularised latent code, we used 5 continuous variables c_i sampled uniformly from (-1, 1). We used 5 noise variables z_i, as we found that using a reduced number of noise variables improved the quality of generated samples for this dataset. To help stabilise training, we used the instance noise trick described in Shi et al. (2016), adding Gaussian noise to the discriminator inputs (0.2 standard deviation on images scaled to [-1, 1]). We followed Radford et al. (2015) for the architecture of the convolutional layers, and used batch normalisation in all layers except the last in the generator and the first in the discriminator.
"}, {"section_index": "8", "section_name": "A.3. ICA AND PCA BASELINES", "section_text": "In order to calculate the ICA benchmark, we applied the fastICA algorithm (Pedregosa et al., 2011) to the whitened pixel data. Due to memory limitations we had to apply the algorithm to pairwise combinations of the subsets of the dataset corresponding to the transforms of each of the three 2D object identities. We calculated the disentangling metric for all three ICA models trained on each of the three pairwise combinations of 2D objects, before presenting the average of these scores in Fig. 6. We performed PCA on the raw and whitened pixel data. Both approaches resulted in similar disentangling metric scores. Fig. 6 reports the PCA results calculated using whitened pixel data for more direct comparison with the ICA score.
Table 1: Summary of the model architectures and optimisation settings used in this paper.
Dataset (model) | Optimiser | Architecture
2D shapes (VAE) | Adagrad, 1e-2 | Input: 4096 (flattened 64x64x1). Encoder: FC 1200, 1200; ReLU. Latents: 10. Decoder: FC 1200, 1200, 1200, 4096; Tanh; Bernoulli outputs.
2D shapes (DC-IGN) | | Input: 64x64x1. Encoder: Conv 96x3x3, 48x3x3, 48x3x3 (padding 1); ReLU; max pooling 2x2. Latents: 10. Decoder: unpooling, Conv 48x3x3, 96x3x3, 1x3x3; ReLU; Sigmoid.
2D shapes (InfoGAN) | Adam, 1e-3 (gen), 2e-4 (dis) | Generator: FC 256, 256, Deconv 128x4x4, 64x4x4 (stride 2); Tanh. Discriminator: Conv and FC reverse of generator; Leaky ReLU; FC 1; Sigmoid. Recognition: Conv and FC shared with discriminator; FC 128, 5; Gaussian. Latents: 10, with z_{1..5} ~ Unif(-1, 1) and c_{1..5} ~ Unif(-1, 1).
Chairs (VAE) | Adam, 1e-4 | Input: 64x64x1. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU. Latents: 32. Decoder: Deconv reverse of encoder; ReLU; Bernoulli outputs.
CelebA (VAE) | Adam, 1e-4 | Input: 64x64x3. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU. Latents: 32. Decoder: Deconv reverse of encoder; ReLU; Gaussian outputs.
3D faces (VAE) | Adam, 1e-4 | Input: 64x64x1. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU. Latents: 32. Decoder: Deconv reverse of encoder; ReLU; Bernoulli outputs.
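For concreteness, the first row of Table 1 translates into a few lines of model code. The sketch below is our own reading of that row (in PyTorch, which the original work does not specify), with the encoder emitting both the mean and log-variance of the 10 latents:

```python
import torch.nn as nn

# 2D shapes beta-VAE from Table 1 (a sketch, not the authors' code).
encoder = nn.Sequential(
    nn.Flatten(),                 # 64x64x1 -> 4096
    nn.Linear(4096, 1200), nn.ReLU(),
    nn.Linear(1200, 1200), nn.ReLU(),
    nn.Linear(1200, 2 * 10),      # mean and log-variance of 10 latents
)
decoder = nn.Sequential(
    nn.Linear(10, 1200), nn.Tanh(),
    nn.Linear(1200, 1200), nn.Tanh(),
    nn.Linear(1200, 1200), nn.Tanh(),
    nn.Linear(1200, 4096), nn.Sigmoid(),  # Bernoulli means over pixels
)
```

Per Table 1, this model was trained with Adagrad at learning rate 1e-2.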
We used a linear classifier to learn the identity of the generative factor that produced a given z_diff^b. The inputs to the classifier are constructed according to Equations 5:

$y \sim \text{Unif}[1 \ldots K], \quad v_{1,l} \sim p(v), \; w_{1,l} \sim p(w), \; w_{2,l} \sim p(w), \quad [v_{2,l}]_k = \begin{cases} [v_{1,l}]_k, & \text{if } k = y \\ \sim p(v_k), & \text{otherwise} \end{cases}$

$z_{diff}^l = |z_{1,l} - z_{2,l}|, \quad z_{diff}^b = \frac{1}{L} \sum_{l=1}^{L} z_{diff}^l \qquad (5)$

We used a fully connected linear classifier to predict p(y|z_diff^b), where y is one of four generative factors (position X, position Y, scale and rotation). We used a softmax output nonlinearity and a negative log likelihood loss function. The classifier was trained using the Adagrad (Duchi et al., 2011) optimisation algorithm with a learning rate of 1e-2 until convergence.
All disentanglement metric score results reported in the paper were calculated in the following manner. Ten replicas of each model with the same hyperparameters were trained using different random seeds to obtain disentangled representations. Each of the ten trained model replicas was evaluated three times using the disentanglement metric score algorithm, each time using a different random seed to initialise the linear classifier. We then discarded the bottom 50% of the thirty resulting scores and reported the remaining results. This was done to control for the outlier results from the few experiments that diverged during training.
The results reported in the table in Fig. 6 (left) were calculated using the following data. Ground truth uses independent data generating factors v (our dataset did not contain any correlated data generating factors w). PCA and ICA decompositions keep the first ten components (PCA components explain 60.8% of variance). β-VAE (β = 4), VAE (β = 1) and VAE untrained have the same fully connected architecture with ten latent units z. InfoGAN uses "inferred" values of the five continuous latents that were regularised with the mutual information objective during training."}, {"section_index": "9", "section_name": "A.5 CLASSIFYING THE GROUND TRUTH DATA GENERATIVE FACTORS VALUES", "section_text": "In order to further verify the validity of our proposed disentanglement metric we ran an extra quantitative test: we trained a linear classifier to predict the ground truth value of each of the five data generative factors used to generate the 2D shapes dataset. While this test does not measure disentangling directly (since it does not measure independence of the latent representation), a disentangled representation should make such a classification trivial. It can be seen in Table 2 that the representation learnt by β-VAE is on average the best representation for factor classification across all five factors. It is closely followed by DC-IGN. It is interesting to note that ICA does well only at encoding object identity, while PCA manages to learn a very good representation of object position.
Table 2: Linear classifier classification accuracy for predicting the ground truth values for each data generative factor from different latent representations. Each factor could take a variable number of possible values: 3 for identity, 6 for scale, 40 for rotation and 32 for position X or Y. Best performing model results in each column are printed in bold.
We design β-VAE to learn conditionally independent factors of variation in the data. Hence we assume conditional independence of every latent z_m given x (where m ∈ 1...M, and M is the dimensionality of z). Starting from the β-VAE objective

$\mathcal{L}(\theta, \phi; x, z, \beta) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta \, D_{KL}(q_\phi(z|x) \,\|\, p(z)) \qquad (6)$

and assuming the pixels x_n are conditionally independent given z, the reconstruction term can be re-written in expectation over individual pixels:

$\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] = \mathbb{E}_{q_\phi(z|x)}\Big[\log \prod_n p_\theta(x_n|z)\Big] = \mathbb{E}_{q_\phi(z|x)}\Big[\sum_n \log p_\theta(x_n|z)\Big] = N \, \mathbb{E}_{q_\phi(z|x)} \mathbb{E}_n[\log p_\theta(x_n|z)] \qquad (7)$

Since our prior p(z) is an isotropic unit Gaussian, we can re-write the second term of Eq. 6 as:

$D_{KL}(q_\phi(z|x) \,\|\, p(z)) = \sum_m \int q_\phi(z_m|x) \log \frac{q_\phi(z_m|x)}{p(z_m)} \, dz_m \qquad (8)$

$= M \, \mathbb{E}_m\left[ D_{KL}(q_\phi(z_m|x) \,\|\, p(z_m)) \right] \qquad (9)$

Dividing Eq. 6 by the number of pixels N then yields the normalised objective

$\frac{\mathcal{L}(\theta, \phi; x, z, \beta)}{N} = \mathbb{E}_{q_\phi(z|x)} \mathbb{E}_n[\log p_\theta(x_n|z)] - \beta_{norm} \, \mathbb{E}_m[D_{KL}(q_\phi(z_m|x) \,\|\, p(z_m))], \quad \beta_{norm} = \frac{\beta M}{N} \qquad (10)$

Optimising the objective in Eq. 10 is equivalent to optimising the original β-VAE formulation from Sec. 2, but with the additional independence assumptions that let us calculate the data log likelihood and KL divergence terms in expectation over the individual pixels x_n and individual latents z_m.
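The per-dimension form of Eq. 10 is what most implementations compute in practice. A minimal sketch, under the same Bernoulli-pixel and diagonal-Gaussian-latent assumptions as before (our own illustration):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss_normalised(x, x_recon, mu, logvar, beta_norm):
    # Eq. 10: reconstruction in expectation over pixels (E_n) and
    # KL in expectation over latents (E_m), with beta_norm = beta * M / N.
    recon = F.binary_cross_entropy(x_recon, x, reduction="mean")
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp())).mean()
    return recon + beta_norm * kl
```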
"}, {"section_index": "10", "section_name": "A.7 RELATIONSHIP BETWEEN β AND ε", "section_text": "For a given ε we can solve the constrained optimisation problem in Eq. 3 (find the optimal (θ*, φ*, β*) such that ∆F(θ*, φ*, β*) = 0). We can then re-write our optimal solution to the original optimisation problem in Eq. 2 as a function of ε:

$G(\theta^*(\varepsilon), \phi^*(\varepsilon)) = \mathbb{E}_{q_{\phi^*(\varepsilon)}(z|x)}\left[\log p_{\theta^*(\varepsilon)}(x|z)\right]$

Now β can be interpreted as the rate of change of the optimal solution (θ*, φ*) to G when varying the constraint ε:

$\beta^* = \frac{dG(\theta^*(\varepsilon), \phi^*(\varepsilon))}{d\varepsilon}$

We hypothesise that data continuity plays a role in guiding unsupervised models towards learning the correct data manifolds. To test this idea we measure how the degree of learnt disentangling changes with reduced continuity in the 2D shapes dataset. We trained a β-VAE with β = 4 (Figure 7A) on subsamples of the original 2D shapes dataset, where we progressively decreased the generative factor sampling density. Reduction in data continuity negatively correlates with the average pixel-wise (Hamming) distance between two consecutive transforms of each object (normalised by the average number of pixels occupied by each of the two adjacent transforms of an object to account for object scale). Figure 8 demonstrates that as the continuity in the data reduces, the degree of disentanglement in the learnt representations also drops. This effect holds after additional hyperparameter tuning and can not solely be explained by the decrease in dataset size, since the same β-VAE can learn disentangled representations from a data subset that preserves data continuity but is approximately 55% of the original size (results not shown).
Figure 8: Negative correlation between data transform continuity and the degree of disentangling achieved by β-VAE. Abscissa is the average normalised Hamming distance between each of the two consecutive transforms of each object. Ordinate is the disentanglement metric score. Disentangling performance is robust to Bernoulli noise added to the data at test time, as shown by slowly degrading classification accuracy up to the 10% noise level, considering that the 2D objects occupy on average between 2-7% of the image depending on scale. Fluctuations in classification accuracy for similar Hamming distances are due to the different nature of subsampled generative factors (i.e. symmetries are present in rotation but are lacking in position).
We present extra latent traversal plots from β-VAE that learnt disentangled representations of the 3D chairs (Figures 10-11) and CelebA (Figures 12-14) datasets. Here we show traversals from all informative latents for a large number of seed images. Samples from β-VAE that learnt disentangled (β = 4) and entangled (β = 1) representations can be seen in Figure 9.
Figure 9: Samples from β-VAE trained on the dataset of 2D shapes that learnt either a disentangled (left, β = 4) or an entangled (right, β = 1) representation of the data generative factors. It can be seen that sampling from an entangled representation results in some unrealistic looking samples. A disentangled representation that inverts the original data generation process does not suffer from such errors.
Figure 10: Latent traversal plots from β-VAE that learnt disentangled representations on the 3D chairs dataset.
Figure 11: Latent traversal plots from β-VAE that learnt disentangled representations on the 3D chairs dataset.
\u00bb \u00a9 Fe ese SF FS YY F&F FS Y\nDyer ee CPF FH DTD ES Y\na\nSe a a a a a\nwe ce ew eee\nee a a rs eS a 2\nHmm mw YP HK KH KH Bm WH KH H\nme Fee wee, Kee KH KH HW HF KH TR\nee ee ee\nSe a\nCm cece OF te se eee EY\nme mee peek eee Ee\n\n|\na a Se a a Se ee ee a a Se Se Se Se |\n- ses J FS YY FF YK KH FB SB so pb 8\n- ses 2 tS YF FSF DF + H Es H FB\n-ses fF ct YE Hs DH HK Ese wo Oo\n- etc eee De KH Teewewve 8\nFe rercwrctcer rete ew Feces 8\neensar ic wt et \u00abse Sst =F Cc ee\nx, ne mm we HH WH HH RH WM mw oe Om\n-_ ee KH SF KH HR KH RM He we HH &\n+, s+ 5S te Y FSF tH MDH HR WH 4 KH HO FF\n- 2+ ZF oer FF + BW F&F 2 DBDs & CF *\u00ae\n- 2+ TF tT Ft \u00a3\u00a3 + DB F&F OO Ts & & *\n- c+ Tt et Et \u00a3\u00a3 + BLOTS & \u00a3\n\n\u2019\nJ\n\n)\n|\n!\nj\nj\nZ3 - age/gender\n\nme - skin colour\n\n- packground _\n\ncal\n\nce > KCE\nBe ep e @ eae\n\naT eat en ep ee\n\nZ\nBes tsb ie ft\n\nt\u00e9 mm ae +\na@adeod\nPU rerorts\nBassas oo a\n\nten wt ededel 46aececa\nlascenananas cee CC- GC G8 acrGCuc\npe SOeoePaccccc\n\n. ra\n; TiCrCCl, = a2 4\na @ r 6\u00b0 ae 08 4+ \u00ae ta oa g\u00b0@O@e8 v4\nSere nH eneoe a Sa@ Hat @8qQee &\u00ab\noo OD op DD S \"Be @ Sat ae eqgeeee\n\ntsb bh\n\u00a3\nr\n\nTatets?\n\nCocurErLCrc\nFigure 12: Latent traversal plots from 6-VAE that learnt disentangled representations on the CelebA\ndataset.\n\u00e9l] 2]:\n\nabel\n\nme Bobo h TE. Bok\nOe FH OOGee\nan Fa eeaca\na8 aereaaca\n\na ee ey SS ee af\na ok\nlg qeeve\ngg? Qgeeove\nAP? eaed*eoe\n\n\u2014~\n\nvYoavaT\n\nZ4~- azimuth\nFigure 13: Latent traversal plots from 6-VAE that learnt disentangled representations on the CelebA\ndataset.\nZ,7 saturation\n\nwv\nwn\n~\nWw\nco)\nWw\nWw\n&\nD\nc\n5\nWw\n!\nN\nN\nFigure 14: Latent traversal plots from 6-VAE that learnt disentangled representations on the Celeb/\ndataset."}]
SJg498clg
[{"section_index": "0", "section_name": "NEURAL GRAPH MACHINES:\nNETWORKS USING GRAPHS", "section_text": "Thang D. Bui\u2019\nSujith Ravi\nUniversity of Cambridge\ntdb40@cam.ac.uk\nUniversity of Cambridge\nGoogle Research\ntdb40@cam.ac.uk\nLabel propagation 1s a powerful and flexible semi-supervised learning technique\non graphs. Neural network architectures, on the other hand, have proven track\nrecords in many supervised learning tasks. In this work, we propose a training\nobjective for neural networks, Neural Graph Machines, for combining the power\nof neural networks and label propagation. The new objective allows the neural\nnetworks to harness both labeled and unlabeled data by: (a) allowing the network\nto train using labeled data as in the supervised setting, (b) biasing the network to\nlearn similar hidden representations for neighboring nodes on a graph, in the same\nvein as label propagation. Such architectures with the proposed objective can be\ntrained efficiently using stochastic gradient descent and scaled to large graphs.\nThe proposed method is experimentally validated on a wide range of tasks (multi-\nlabel classification on social graphs, news categorization and semantic intent clas-\nsification) using different architectures (NNs, CNNs, and LSTM RNNs)."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Semi-supervised learning is a powerful machine learning paradigm that can improve the prediction\nperformance compared to techniques that use only labeled data, by leveraging a large amount of\nunlabeled data. The need of semi-supervised learning arises in many problems in computer vision,\nnatural language processing or social networks, in which getting labeled datapoints is expensive or\nunlabeled data is abundant and readily available.\nThere exist a plethora of semi-supervised learning methods. The simplest one uses bootstrapping\ntechniques to generate pseudo-labels for unlabeled data generated from a system trained on labeled\ndata. However, this suffers from label error feedbacks (Lee} 2013). In a similar vein, autoencoder\nbased methods often need to rely on a two-stage approach: train an autoencoder using unlabeled\ndata to generate an embedding mapping, and use the learnt embeddings for prediction. In practice,\nthis procedure is often costly and inaccurate in practice. Another example is transductive SVMs\n[1999), which is too computationally expensive to be used for large datasets. Methods\nthat are based on generative models and amortized variational inference can\nwork well for images and videos, but it is not immediately clear on how to extend such techniques\nto handle sparse and multi-modal inputs or graphs over the inputs. In contrast to the methods above,\ngraph-based techniques such as label propagation often\nprovide a versatile, scalable, and yet effective solution to a wide range of problems. These methods\nconstruct a smooth graph over the unlabeled and labeled data. Graphs are also often a natural way\nto describe the relationships between nodes, such as similarities between embeddings, phrases or\nimages, or connections between entities on the web or relations in a social network. Edges in the\ngraph connect semantically similar nodes or datapoints, and if present, edge weights reflect how\nstrong such similarities are. By providing a set of labeled nodes, such techniques iteratively refine\nthe node labels by aggregating information from neighbours and propagate these labels to the nodes\u2019\nneighbours. 
In practice, these methods often converge quickly and can be scaled to large datasets with a large label space (Ravi & Diao, 2016). We build upon the principle behind label propagation for our method.
Vivek Ramavajjala
Google Research
vramavaj@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Another key motivation of our work is the recent advances in neural networks and their performance on a wide variety of supervised learning tasks such as image and speech recognition or sequence-to-sequence learning (Sutskever et al., 2014). These results are however conditioned on training very large networks on large datasets, which may need millions of labeled training input-output pairs. This begs the question: can we harness previous state-of-the-art semi-supervised learning techniques, to jointly train neural networks using limited labeled data and unlabeled data to improve their performance?

Contributions: We propose a discriminative training objective for neural networks with graph augmentation that can be trained with gradient descent and efficiently scaled to large graphs. In particular, we introduce a regularization term for generic neural network architectures that enforces similarity between nodes in the graphs. This is inspired by the objective function of label propagation. The resulting cost is amenable to stochastic training and can be applied to various model classes. We also investigate using graphs as direct inputs to train neural network classifiers and experimentally demonstrate that this procedure is more efficient and accurate than previous two-stage approaches such as finding embeddings and using them for classification.

The closest approach to our work is the framework proposed by Weston et al. (2012); we extend their work in several ways: (a) our proposed training scheme is flexible, for example multiple graphs from multiple domains can be combined, (b) we provide extensive experiments on different types of neural networks and on properly constructed graphs (in contrast to the nearest neighbor graphs in Weston et al. (2012)), (c) we propose using graphs as inputs to the neural networks if there are no input features. Our work is also different from recent works on using neural networks on graphs (e.g. see Niepert et al. (2016)). Instead, we advocate a training objective that uses graphs to augment neural network learning.

In this section, we will lay out the groundwork for our proposed training objective in section 3.

We first provide a concise introduction to label propagation and its training objective. Suppose we are given a graph G = (V, E, W) where V is the set of nodes, E the set of edges and W the edge weight matrix. Let V_l, V_u be the labeled and unlabeled nodes in the graph. The goal is to predict a soft assignment of labels for each node in the graph, \hat{Y}, given the training label distribution for the seed nodes, Y. Mathematically, label propagation performs minimization of the following convex objective function, for L labels,

\mathcal{C}_{LP}(\hat{Y}) = \mu_1 \sum_{v \in V_l} \|\hat{Y}_v - Y_v\|_2^2 + \mu_2 \sum_{v \in V, u \in \mathcal{N}(v)} w_{u,v} \|\hat{Y}_v - \hat{Y}_u\|_2^2 + \mu_3 \sum_{v \in V} \|\hat{Y}_v - U\|_2^2,   (1)

subject to \sum_{l=1}^{L} \hat{Y}_{vl} = 1, where \mathcal{N}(v) is the neighbour node set of the node v, U is the prior distribution over all labels, w_{u,v} is the edge weight between nodes u and v, and \mu_1, \mu_2, and \mu_3 are hyperparameters that balance the contribution of individual terms in the objective. The terms in the objective function above encourage that: (a) the label distribution of seed nodes should be close to the ground truth, (b) the label distribution of neighbouring nodes should be similar, and, (c) if relevant, the label distribution should stay close to our prior belief. This objective function can be solved efficiently using iterative methods such as the Jacobi procedure. That is, in each step, each node aggregates the label distributions from its neighbours and adjusts its own distribution, which is then repeated until convergence. In practice, the iterative updates can be done in parallel or in a distributed fashion, which then allows large graphs with a large number of nodes and labels to be trained efficiently. Bengio et al. (2006) and Ravi & Diao (2016) are good surveys on the topic for interested readers.
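As a concrete illustration of these Jacobi-style updates, here is a minimal NumPy sketch; the function signature, the initialization at the prior and the fixed iteration count are illustrative assumptions rather than the exact procedure of the surveys cited above.

```python
import numpy as np

def label_propagation(W, Y_seed, is_seed, U, mu=(1.0, 1.0, 0.01), n_iters=50):
    """Jacobi-style updates for the label propagation objective in eq. (1).

    W       : (n, n) symmetric non-negative edge-weight matrix
    Y_seed  : (n, L) ground-truth label distributions (rows of non-seeds ignored)
    is_seed : (n,) boolean mask marking the labeled (seed) nodes V_l
    U       : (L,) prior label distribution
    """
    mu1, mu2, mu3 = mu
    n, _ = Y_seed.shape
    Y_hat = np.tile(U, (n, 1))            # start every node at the prior
    seed = is_seed[:, None].astype(float)
    for _ in range(n_iters):
        # Each node aggregates its neighbours' current label distributions,
        # plus the ground-truth term (seeds only) and the prior term.
        num = mu2 * (W @ Y_hat) + mu1 * seed * Y_seed + mu3 * U
        den = mu2 * W.sum(axis=1, keepdims=True) + mu1 * seed + mu3
        # num/den is a convex combination of label distributions, so each
        # row remains on the simplex (the constraint sum_l Y_hat[v, l] = 1).
        Y_hat = num / den
    return Y_hat
```

Because each update is a per-node weighted average, the iterations can be run in parallel over nodes, which is what makes the distributed variants mentioned above practical.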
Neural networks are a class of non-linear mappings from inputs to outputs, comprised of multiple layers that can potentially learn useful representations for predicting the outputs. We will view various models such as feedforward neural networks, recurrent neural networks and convolutional networks under the same umbrella. Given a set of N training input-output pairs {x_n, y_n}_{n=1}^{N}, such neural networks are often trained by performing maximum likelihood learning, that is, tuning their parameters so that the networks' outputs are close to the ground truth under some criterion,

\mathcal{C}_{NN}(\theta) = \sum_n c(g_\theta(x_n), y_n),   (2)

where g_\theta(\cdot) denotes the overall mapping, parameterized by \theta, and c(\cdot) denotes a loss function such as l-2 for regression or cross entropy for classification. The cost function c and the mapping g are typically differentiable w.r.t. \theta, which facilitates optimisation via gradient descent. Importantly, this can be scaled to a large number of training instances by employing stochastic training using minibatches of data. However, it is not clear how unlabeled data, if available, can be treated using this objective, or if extra information about the training set, such as relational structures, can be used.

In this section, we devise a discriminative training objective for neural networks, that is inspired by the label propagation objective and uses both labeled and unlabeled data, and can be trained by stochastic gradient descent.

First, we take a close look at the two objective functions discussed in section 2. The label propagation objective (equation 1) makes sure the predicted label distributions of neighbouring nodes are similar, while those of labeled nodes are close to the ground truth. For example: if a cat image and a dog image are strongly connected in a graph, and if the cat node is labeled as animal, the predicted probability of the dog node being animal is also high. In contrast, the neural network training objective (equation 2) only takes into account the labeled instances, and ensures correct predictions on the training set. As a consequence, a neural network trained on the cat image alone will not make an accurate prediction on the dog image.

Such a shortcoming of neural network training can be rectified by biasing the network using prior knowledge about the relationship between instances in the dataset. In particular, for the domains we are interested in, training instances (either labeled or unlabeled) that are connected in a graph, for example, dog and cat in the above example, should have similar predictions. This can be done by encouraging neighboring data points to have a similar hidden representation learnt by a neural network, resulting in a modified objective function for training neural network architectures using both labeled and unlabeled datapoints:

\mathcal{C}_{NGM}(\theta) = \sum_{n=1}^{V_l} c(g_\theta(x_n), y_n) + \alpha_1 \sum_{(u,v) \in \mathcal{E}_{LL}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + \alpha_2 \sum_{(u,v) \in \mathcal{E}_{LU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + \alpha_3 \sum_{(u,v) \in \mathcal{E}_{UU}} w_{uv} d(h_\theta(x_u), h_\theta(x_v)),   (3)

where \mathcal{E}_{LL}, \mathcal{E}_{LU}, and \mathcal{E}_{UU} are sets of labeled-labeled, labeled-unlabeled and unlabeled-unlabeled edges correspondingly, h(\cdot) represents the hidden representations of the inputs produced by the neural network, d(\cdot) is a distance metric, and {\alpha_1, \alpha_2, \alpha_3} are hyperparameters. We call architectures trained using this objective Neural Graph Machines, and schematically illustrate the concept in figure 1. In practice, we choose an l-1 or l-2 distance metric for d(\cdot), and h(x) to be the last layer of the neural network. However, these choices can be changed, to a customized metric, or to using an intermediate hidden layer instead."}, {"section_index": "3", "section_name": "3.1 CONNECTIONS TO PREVIOUS METHODS", "section_text": "Note that we have separated the terms based on the edge types, as these can affect the training differently. The graph-dependent \alpha hyperparameters control the balance of these terms. When \alpha_i = 0, the proposed objective ignores the similarity constraint and becomes a supervised-only objective as in equation 2. When g_\theta(x) = h_\theta(x) = \hat{y}, where \hat{y} is the label distribution, the individual cost functions (c and d) are squared l-2 norms, and the objective is trained using \hat{y} directly instead of \theta, we arrive at the label propagation objective in equation 1. Therefore, the proposed objective could be thought of as a non-linear version of the label propagation objective, and a graph-regularized version of the neural network training objective.
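To make the objective in equation (3) concrete, here is a minimal PyTorch-style sketch of the graph-augmented loss. The model contract (returning both the logits g_\theta(x) and a hidden representation h_\theta(x)), the edge-list format and the squared l-2 choice for d(\cdot) are assumptions made for illustration; the paper's own experiments are implemented in TensorFlow.

```python
import torch
import torch.nn.functional as F

def ngm_loss(model, x, y, edge_sets, edge_weights, alphas):
    """Graph-augmented objective of eq. (3), sketched for a model that
    returns (logits g_theta(x), hidden representation h_theta(x)).

    edge_sets    : three (E_k, 2) long tensors for the LL, LU and UU edges
    edge_weights : three (E_k,) tensors holding the w_uv of each edge set
    alphas       : (alpha_1, alpha_2, alpha_3)
    """
    logits, hidden = model(x)
    # Supervised term: the first len(y) rows of x are the labeled nodes V_l.
    loss = F.cross_entropy(logits[: len(y)], y)
    # Graph regularizer: squared l-2 distance between the hidden
    # representations of the endpoints of every edge, one term per edge type.
    for alpha, edges, w in zip(alphas, edge_sets, edge_weights):
        if edges.numel() == 0:
            continue
        d = (hidden[edges[:, 0]] - hidden[edges[:, 1]]).pow(2).sum(dim=1)
        loss = loss + alpha * (w * d).sum()
    return loss
```

Swapping the `.pow(2).sum(dim=1)` line for an absolute difference gives the l-1 variant, and taking `hidden` from an intermediate layer instead of the last one corresponds to the alternative choice mentioned above.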
Figure 1: Illustration of Neural Graph Machine: the training objective ensures the neural net to make accurate node-level predictions and biases the hidden representations of neighbouring nodes to be similar. [Left: feedforward NNs, Right: RNNs]

Similar to graph-based label propagation, the choice of the input graphs is critical, to correctly bias the neural network's prediction. Depending on the type of the graphs and nodes on the graphs, they can be readily available to use, such as social networks or protein linking networks, or they can be constructed (a) using generic graphs such as Knowledge Bases, that consist of links between vertices on the graph, (b) using embeddings learnt by an unsupervised learning technique, or, (c) using sparse feature representations for each vertex. Additionally, the proposed training objective can be easily modified for directed graphs.

We have discussed using node-level features as inputs to the neural network. In the absence of such inputs, our training scheme can still be deployed using input features derived from the graph itself. We show in figure 2 and in the experiments that neighbourhood information, such as the rows of the adjacency matrix, is simple to construct yet provides powerful inputs to the network. These features can also be combined with existing features.

Figure 2: Illustration of how we can construct inputs to the neural network using the adjacency matrix.
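Below is a minimal NumPy sketch of this input construction, assuming a dense adjacency matrix for readability (sparse rows would be preferable for large graphs):

```python
import numpy as np

def adjacency_row_inputs(W, extra_features=None):
    """Use node v's row of the (weighted) adjacency matrix as its input
    vector; optionally concatenate existing node features when available."""
    X = np.asarray(W, dtype=np.float32)
    if extra_features is not None:
        X = np.concatenate([X, extra_features], axis=1)
    return X

# Example: a 3-node graph; node 0's input is its connection pattern.
W = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 0.0],
     [0.5, 0.0, 0.0]]
X = adjacency_row_inputs(W)   # X[0] == [0.0, 1.0, 0.5]
```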
The proposed objective function in equation 3 has several summations over the labeled points and edges, and can be equivalently written as follows,

\mathcal{C}_{NGM}(\theta) = \sum_{(u,v) \in \mathcal{E}_{LL}} \alpha_1 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c(g_\theta(x_u), y_u) + c(g_\theta(x_v), y_v)
    + \sum_{(u,v) \in \mathcal{E}_{LU}} \alpha_2 w_{uv} d(h_\theta(x_u), h_\theta(x_v)) + c(g_\theta(x_u), y_u)
    + \sum_{(u,v) \in \mathcal{E}_{UU}} \alpha_3 w_{uv} d(h_\theta(x_u), h_\theta(x_v)).   (4)

The objective in its new form enables stochastic training to be deployed. In particular, in each training iteration, we use a minibatch of edges and obtain the stochastic gradients of the objective. To further reduce noise, we can select a labeled node and sample from the set of edges that are incident to that node. The number of edges per node to be sampled can be controlled; a sketch of this sampling scheme is given below.
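The following is a minimal sketch of such an edge-level sampling scheme; the helper names and the node-to-edges index are illustrative assumptions, not the paper's implementation.

```python
import random

def edge_minibatches(edges, batch_size, epochs=1):
    """Yield shuffled minibatches of edges; the objective in eq. (4)
    decomposes over edges, so each minibatch yields a stochastic gradient."""
    edges = list(edges)
    for _ in range(epochs):
        random.shuffle(edges)
        for i in range(0, len(edges), batch_size):
            yield edges[i:i + batch_size]

def sample_incident_edges(incident, labeled_nodes, k):
    """Noise-reducing variant from the text: pick a labeled node, then
    sample up to k of its incident edges (`incident`: node -> edge list)."""
    v = random.choice(labeled_nodes)
    return random.sample(incident[v], min(k, len(incident[v])))
```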
"}, {"section_index": "4", "section_name": "3.4 COMPLEXITY", "section_text": "The complexity of each training epoch using equation 4 is O(M), where M = |\mathcal{E}| is the number of edges in the graph. In practice, unlabeled-unlabeled edges do not seem to help learning and could be ignored, which further reduces the above complexity.

In this section, we provide several experiments showing the efficacy of the proposed training objective on a wide range of tasks, datasets and network architectures. All the experiments are done using TensorFlow (Abadi et al., 2015).

We first consider multi-label classification on nodes of a graph. We use the BlogCatalog dataset (Agarwal et al., 2009), which has 10,312 nodes and 333,983 edges, and there are 39 labels. This graph represents a network of social relationships given by bloggers, and the labels are the bloggers' interests. We train a feedforward neural network with one hidden layer of 50 units and train each class as a one-vs-rest binary classification task. Since there are no features for each node, we use the rows of the adjacency matrix as inputs to the network, as discussed in section 3.2. Since we use the test set to construct the graph and augment the training objective, the learning in this experiment is transductive. Since the training set is extremely unbalanced, we employ weighted sampling during training, i.e. making sure each minibatch has both positive and negative examples. In this experiment, we fix the \alpha_i to be equal, and experiment with \alpha = 0 and 0.1 (0 means no edge information during training); we use the l-2 metric to compute the distance between the hidden representations.

We compare our method against a two-stage approach: use node2vec (Grover & Leskovec, 2016) to generate node embeddings and use a linear one-vs-rest classifier for classification. The methods are evaluated using two metrics, Macro F1 and Micro F1. The results for different train/test splits and different \alpha values, together with the baseline, are included in table 1. The results demonstrate that: 1. using the graph itself as direct input to the neural network and letting the network learn a non-linear mapping is more effective than the two-stage approach considered; 2. using the graph information improves the performance in the small data regime (for example, when the training set is only 20% of the dataset). We observe the same improvement over Node2vec on the Micro F1 metric, and \alpha = 0.1 is comparable to \alpha = 0 but performs better on the recall metric.

*These results are different compared to Grover & Leskovec (2016), since we treat the classifiers (one per label) independently. This setting is the same as for our NGM-NN classifiers.

Table 1: Results for BlogCatalog dataset averaged over 10 random splits. Higher is better.

                          Macro F1
Train amount / \alpha     0        0.1      Node2vec*
0.2                       0.180    0.191    0.168
0.5                       0.238    0.242    0.174
0.8                       0.263    0.262    0.177

We evaluate the proposed objective function on a multi-class text classification task using a character-level convolutional neural network (CNN). We use the AG news dataset from Zhang et al. (2015), where the task is to classify a news article into one of 4 categories. Each category has 30,000 examples for training and 1,900 examples for testing. In addition to the train and test sets, there are 111,469 examples that are treated as unlabeled examples.

We restrict the graph construction to only the train set and the unlabeled examples and keep the test set only for evaluation. We use the Google News word2vec corpus to calculate the average embedding for each news article and use the cosine similarity of document embeddings as a similarity metric. Each node is restricted to 5 neighbors.

We construct the CNN in the same way as Zhang et al. (2015), but with significantly smaller layers, as shown in table 2.

Table 2: Settings of CNNs for the text classification experiment

Setting                                    Baseline "small" CNN    "Tiny" CNN
# of convolutional layers                  6                       3
Frame size in conv. layers                 256                     32
# of fully-connected layers                3                       3
Hidden units in fully-connected layers     1024                    256

The network is trained with the same parameters as Zhang et al. (2015) but only for 20 epochs. We compare the final outputs using the cross entropy loss, that is d = cross_entropy(g(x_u), g(x_v)). Using the proposed objective function, the NGM-CNN provides a 1.8% absolute and 2.1% relative improvement in accuracy, despite using a smaller network.
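For reference, a minimal sketch of this output-level distance, reading d = cross_entropy(g(x_u), g(x_v)) as the soft cross entropy between the two endpoints' predicted class distributions (an interpretation assumed here for illustration):

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy_distance(logits_u, logits_v):
    """d(x_u, x_v) = cross_entropy(g(x_u), g(x_v)): cross entropy between
    the two endpoints' predicted class distributions."""
    p_u = F.softmax(logits_u, dim=-1)
    log_p_v = F.log_softmax(logits_v, dim=-1)
    return -(p_u * log_p_v).sum(dim=-1)
```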
We show the results in table 3.

Table 3: Results for News Categorization using CNNs

Network                                              Accuracy %
Baseline: "small" CNN                                84.35
Baseline: "small" CNN with thesaurus augmentation    85.20
Baseline: "tiny" CNN                                 85.07
"Tiny" CNN with NGM                                  86.90

Finally, we compare the performance of our approach for training RNN sequence models (LSTM) for a semantic intent classification task, as described in the recent work on SmartReply (Kannan et al., 2016) for automatically generating short email responses. One of the underlying tasks in SmartReply is to discover and map short response messages to semantic intent clusters.¹ We choose 20 intent classes and created a dataset comprised of 5,483 samples (3,832 for training, 560 for validation and 1,091 for testing). Each sample instance corresponds to a short response message text paired with a semantic intent category that was manually verified by human annotators. For example, "That sounds awesome!" and "Sounds fabulous" belong to the sounds good intent cluster. We construct a sparse graph in a similar manner as the news categorization task, using word2vec embeddings over the message text and computing similarity to generate a response message graph with fixed node degree (k=10). We use l-2 for the distance metric d(\cdot) and choose \alpha based on the development set.

¹For details regarding SmartReply and how the semantic intent clusters are generated, refer to Kannan et al. (2016).

We run the experiments for a fixed number of time steps and pick the best results on the development set. A multilayer LSTM architecture (2 layers, 100 dimensions) is used for the RNN sequence model. The LSTM model and its NGM variant are also compared against other baseline systems: the Random baseline ranks the intent categories randomly, and the Frequency baseline ranks them in order of their frequency in the training corpus. To evaluate the intent prediction quality of the different approaches, for each test instance we compute the rank of the actual intent category with respect to the ranking produced by the method, and use this to calculate the Mean Reciprocal Rank:

\mathrm{MRR} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\mathrm{rank}_i}

We show in table 4 that LSTM RNNs with our proposed graph-augmented training objective function outperform the standard baselines by offering a better MRR.
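For concreteness, a small self-contained sketch of this metric; the list-based ranking representation is an illustrative assumption.

```python
def mean_reciprocal_rank(ranked_lists, true_labels):
    """MRR = (1/N) * sum_i 1/rank_i, where rank_i is the 1-indexed position
    of the true intent in the model's ranking for test instance i."""
    total = 0.0
    for ranking, truth in zip(ranked_lists, true_labels):
        total += 1.0 / (ranking.index(truth) + 1)
    return total / len(true_labels)

# Example: truth ranked 1st, 2nd and 4th -> MRR = (1 + 0.5 + 0.25) / 3
assert abs(mean_reciprocal_rank([['a', 'b'], ['b', 'a'], ['c', 'd', 'e', 'a']],
                                ['a', 'a', 'a']) - (1 + 0.5 + 0.25) / 3) < 1e-9
```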
Table 4: Results for Semantic Intent Classification using LSTM RNNs

Model         Mean Reciprocal Rank (MRR)
Random        0.175
Frequency     0.258
LSTM          0.276
NGM-LSTM      0.284
"}, {"section_index": "5", "section_name": "5 CONCLUSIONS", "section_text": "We have proposed a training objective for neural network architectures that can leverage both labeled and unlabeled data. Inspired by the label propagation objective function, the proposed objective biases the neural networks to learn similar hidden representations for nodes connected by an edge on the graph. Importantly, this objective can be trained by stochastic gradient descent, as in supervised neural network training. We validate the efficacy of the graph-augmented objective on various state-of-the-art neural network architectures on bloggers' interest, text category and semantic intent classification problems. Additionally, the node-level input features can be combined with graph features as inputs to the neural network. We showed that a neural network that simply takes the adjacency matrix of a graph and produces node labels can perform better than a recently proposed two-stage approach using sophisticated graph embeddings and a linear classifier.

While our objective can be applied to multiple graphs which come from different domains, we have not fully explored this aspect and leave it as future work. We expect the domain-specific networks can interact with the graphs to determine the importance of each domain/graph source in prediction. Another possible direction of future work is to use our objective on directed graphs, that is, to control the direction of influence between nodes during training.

We would like to thank the Google Expander team for insightful feedback."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, 1999.

Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, Laszlo Lukacs, Marina Ganea, Peter Young, and Vivek Ramavajjala. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2016.

Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. arXiv preprint arXiv:1605.05273, 2016.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.

Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical report, School of Computer Science, Carnegie Mellon University."}]
B16dGcqlx
[{"section_index": "0", "section_name": "THIRD-PERSON IMITATION LEARNING", "section_text": "Keintorcement learning UxL) makes it possible to train agents Capable oF achiev\ning sophisticated goals in complex and uncertain environments. A key difficulty i\nreinforcement learning is specifying a reward function for the agent to optimize\nTraditionally, imitation learning in RL has been used to overcome this problem\nUnfortunately, hitherto imitation learning methods tend to require that demonstra\ntions are supplied in the first-person: the agent is provided with a sequence o\nstates and a specification of the actions that it should have taken. While powerful\nthis kind of imitation learning is limited by the relatively hard problem of collect\ning first-person demonstrations. Humans address this problem by learning fror\nthird-person demonstrations: they observe other humans perform tasks, infer th\ntask, and accomplish the same task themselves.\n\nIn this paper, we present a method for unsupervised third-person imitation learn\ning. Here third-person refers to training an agent to correctly achieve a simpl\ngoal in a simple environment when it is provided a demonstration of a teache\ntask, and accomplish the same task themselves.\nIn this paper, we present a method for unsupervised third-person imitation learn-\ning. Here third-person refers to training an agent to correctly achieve a simple\ngoal in a simple environment when it is provided a demonstration of a teacher\nachieving the same goal but from a different viewpoint; and unsupervised refers\nto the fact that the agent receives only these third-person demonstrations, and is\nnot provided a correspondence between teacher states and student states. Our\nmethods primary insight is that recent advances from domain confusion can be\nutilized to yield domain agnostic features which are crucial during the training\nprocess. To validate our approach, we report successful experiments on learning\nfrom third-person demonstrations in a pointmass domain, a reacher domain, and\ninverted nendulum."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) is a framework for training agents to maximize rewards in large, un-\nknown, stochastic environments. In recent years, combining techniques from deep learning with\nreinforcement learning has yielded a string of successful applications in game playing and robotics\nMnih et al. (2015} 2016); Schulman et al. (2015a); Levine et al.| (2016). These successful appli-\ncations, and the speed at which the abilities of RL algorithms have been increasing, makes it an\nexciting area of research with sicnificant potential for future applications.\nWhile IRL algorithms are appealing, they impose the somewhat unrealistic requirement that the\ndemonstrations should be provided from the first-person point of view with respect to the agent\nHuman beings learn to imitate entirely from third-person demonstrations \u2014 i.e., by observing othe\nhumans achieve goals. Indeed, in many situations, first-person demonstrations are outright impossi-\nble to obtain. Meanwhile, third-person demonstrations are often relatively easy to obtain."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One of the major weaknesses of RL is the need to manually specify a reward function. For each\ntask we wish our agent to accomplish, we must provide it with a reward function whose maximizer\nwill precisely recover the desired behavior. 
This weakness is addressed by the field of Inverse Reinforcement Learning (IRL). Given a set of expert trajectories, IRL algorithms produce a reward function under which the expert trajectories enjoy the property of optimality. Recently, there has been a significant amount of work on IRL, and current algorithms can infer a reward function from a very modest number of demonstrations (e.g., Abbeel & Ng (2004); Ratliff et al. (2006); Ziebart et al. (2008); Levine et al. (2011); Ho & Ermon (2016); Finn et al. (2016)).

While IRL algorithms are appealing, they impose the somewhat unrealistic requirement that the demonstrations should be provided from the first-person point of view with respect to the agent. Human beings learn to imitate entirely from third-person demonstrations, i.e., by observing other humans achieve goals. Indeed, in many situations, first-person demonstrations are outright impossible to obtain. Meanwhile, third-person demonstrations are often relatively easy to obtain.

The goal of this paper is to develop an algorithm for third-person imitation learning. Future advancements in this class of algorithms would significantly improve the state of robotics, because it will enable people to easily teach robots new skills and abilities. Importantly, we want our algorithm to be unsupervised: it should be able to observe another agent perform a task, infer that there is an underlying correspondence to itself, and find a way to accomplish the same task.

We offer an approach to this problem by borrowing ideas from domain confusion (Tzeng et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). The high-level idea is to introduce an optimizer under which we can recover both a domain-agnostic representation of the agent's observations, and a cost function which utilizes this domain-agnostic representation to capture the essence of expert trajectories. We formulate this as a third-person RL-GAN problem, and our solution builds on the first-person RL-GAN formulation by Ho & Ermon (2016).

Surprisingly, we find that this simple approach has been able to solve the problems that are presented in this paper (illustrated in Figure 1), even though the student's observations are related in a complicated way to the teacher's demonstrations (given that the observations and the demonstrations are pixel-level). As techniques for training GANs become more stable and capable, we expect our algorithm to be able to solve harder third-person imitation tasks without any direct supervision."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Imitation learning (also learning from demonstrations or programming by demonstration) considers the problem of acquiring skills from observing demonstrations. Imitation learning has a long history, with several good survey articles, including (Schaal, 1999; Calinon, 2009; Argall et al., 2009). Two main lines of work within imitation learning are: 1) behavioral cloning, where the demonstrations are used to directly learn a mapping from observations to actions using supervised learning, potentially with interleaving learning and data collection (e.g., Pomerleau (1989); Ross et al. (2011)); 2) Inverse reinforcement learning (Ng et al., 2000), where a reward function is estimated that explains the demonstrations as (near) optimal behavior. This reward function could be represented as nearness to a trajectory (Calinon et al., 2007; Abbeel et al., 2010), as a weighted combination of features (Abbeel & Ng, 2004; Ratliff et al., 2006; Ramachandran & Amir, 2007; Ziebart et al., 2008), or learned via deep or nonparametric methods (Ratliff et al., 2007; Levine et al., 2011; Wulfmeier et al., 2015; Finn et al., 2016; Ho & Ermon, 2016).

Figure 1: From left to right, the three domains we consider in this paper: pointmass, reacher, and pendulum. Top row is the third-person view of a teacher demonstration. Bottom row is the agent's view in their version of the environment.
For the point and reacher environments, the camera angles differ by approximately 40 degrees. For the pendulum environment, the color of the pole differs.

This past work, however, is not directly applicable to the third-person imitation learning setting. In third-person imitation learning, the observations and actions obtained from the demonstrations are not the same as what the imitator agent will be faced with. A typical scenario would be: the imitator agent watches a human perform a demonstration, and then has to execute that same task. As discussed in Nehaniv & Dautenhahn (2001), the "what and how to imitate" questions become significantly more challenging in this setting. To directly apply existing behavioral cloning or inverse reinforcement learning techniques would require knowledge of a mapping between observations and actions in the demonstrator space to observations and actions in the imitator space. Such a mapping is often difficult to obtain, and it typically relies on providing feature representations that capture the invariance between both environments (Calinon et al., 2007). Contrary to prior work, we consider third-person imitation learning from raw sensory data, where no such features are made available.

The most closely related work to ours is by Finn et al. (2016); Ho & Ermon (2016); Wulfmeier et al. (2015), who also consider inverse reinforcement learning directly from raw sensory data. However, the applicability of their approaches is limited to the first-person setting. Indeed, matching raw sensory observations is impossible in the third-person setting.

Our work also closely builds on advances in generative adversarial networks (Goodfellow et al., 2014), which are very closely related to imitation learning as explained in Ho & Ermon (2016). In our optimization formulation, we apply the gradient flipping technique from Ganin & Lempitsky (2014).

The problem of adapting what is learned in one domain to another domain has been studied extensively (e.g., Yang et al., 2007; Mansour et al., 2009; Kulis et al., 2011; Aytar & Zisserman, 2011; Hoffman et al., 2013). It has also been shown that features trained in one domain can be relevant to other domains (Donahue et al., 2014). The work most closely related to ours is Tzeng et al. (2014; 2015), who also consider an explicit domain confusion loss, forcing trained classifiers to rely on features that do not allow distinguishing between two domains. This work in turn relates to earlier work by Bromley et al. (1993); Chopra et al. (2005), which also considers supervised training of deep feature embeddings.

Our approach to third-person imitation learning relies on reinforcement learning from raw sensory data in the imitator domain. Several recent advances in deep reinforcement learning have made this practical, including Deep Q-Networks (Mnih et al., 2015), Trust Region Policy Optimization (Schulman et al., 2015a), A3C (Mnih et al., 2016), and Generalized Advantage Estimation (Schulman et al., 2015b). Our approach uses Trust Region Policy Optimization.

A discrete-time finite-horizon discounted Markov decision process (MDP) is represented by a tuple M = (S, A, P, r, \rho_0, \gamma, T), in which S is a state set, A an action set, P : S \times A \times S \to \mathbb{R}_+ a transition probability distribution, r : S \times A \to \mathbb{R} a reward function, \rho_0 : S \to \mathbb{R}_+ an initial state distribution, \gamma \in [0, 1] a discount factor, and T the horizon.

In the reinforcement learning setting, the goal is to find a policy \pi_\theta : S \times A \to \mathbb{R}_+, parametrized by \theta, that maximizes the expected discounted sum of rewards incurred, \eta(\pi_\theta) = \mathbb{E}_{\pi_\theta}[\sum_{t=0}^{T} \gamma^t r(s_t)], where s_0 \sim \rho_0(s_0), a_t \sim \pi_\theta(a_t | s_t), and s_{t+1} \sim P(s_{t+1} | s_t, a_t).

In the (first-person) imitation learning setting, we are not given the reward function. Instead we are given traces (i.e., sequences of states traversed) by an expert who acts according to an unknown policy \pi_E. The goal is to find a policy \pi_\theta that performs as well as the expert against the unknown reward function.
It was shown in Abbeel & Ng (2004) that this can be achieved through inverse reinforcement learning by finding a policy \pi_\theta that matches the expert's empirical expectation over the discounted sum of all features that might contribute to the reward function. The work by Ho & Ermon (2016) generalizes this to the setting when no features are provided, as follows: find a policy \pi_\theta that makes it impossible for a discriminator (in their work a deep neural net) to distinguish states visited by the expert from states visited by the imitator agent. This can be formalized as follows:

\max_{\pi_\theta} \min_{D_R} \; -\mathbb{E}_{\pi_\theta}[\log D_R(s)] - \mathbb{E}_{\pi_E}[\log(1 - D_R(s))]   (1)

Here, the expectations are over the states experienced by the policy of the imitator agent, \pi_\theta, and by the policy of the expert, \pi_E, respectively. D_R is the discriminator, which outputs the probability of a state having originated from a trace from the imitator policy \pi_\theta. If the discriminator is perfectly able to distinguish which policy originated state-action pairs, then D_R will consistently output a probability of 1 in the first term, and a probability of 0 in the second term, making the objective its lowest possible value of zero. It is the role of the imitator agent \pi_\theta to find a policy that makes it difficult for the discriminator to make that distinction. The desired equilibrium has the imitator agent making it impractical for the discriminator to distinguish, hence forcing the discriminator to assign probability 0.5 in all cases. Ho & Ermon (2016) present a practical approach for solving this type of game when representing both \pi_\theta and D_R as deep neural networks. Their approach repeatedly performs gradient updates on each of them. Concretely, for a current policy \pi_\theta traces can be collected, which together with the expert traces form a data-set on which D_R can be trained with supervised learning minimizing the negative log-likelihood (in practice only performing a modest number of updates). For a fixed D_R, this is a policy optimization problem where -\log D_R(s, a) is the reward, and policy gradients can be computed from those same traces. Their approach uses trust region policy optimization (Schulman et al., 2015a) to update the imitator policy \pi_\theta from those gradients.

In our work we will have more terms in the objective, so for compactness of notation, we will realize the discriminative minimization from Eqn. (1) as follows:

\max_{\pi_\theta} \min_{D_R} \mathcal{L}_R = \sum_i CE(D_R(s_i), cl_i)   (2)

where s_i is state i, cl_i is the correct class label (was the state s_i obtained from an expert vs. from a non-expert), and CE is the standard cross entropy loss.

Formally, the third-person imitation learning problem can be stated as follows. Suppose we are given two Markov Decision Processes M_{\pi_E} and M_{\pi_\theta}.
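As a brief aside, a minimal PyTorch-style sketch of one discriminator update under equation (2) above may help make this concrete; the two-class labeling convention (expert = 0, imitator = 1) and all names are illustrative assumptions rather than the authors' implementation, and the policy step would then use -log D_R(s) as the reward, as described above.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, opt, expert_states, novice_states):
    """One update of D_R: cross entropy on states labeled by their origin
    (expert = class 0, imitator = class 1), as in eq. (2)."""
    states = torch.cat([expert_states, novice_states], dim=0)
    labels = torch.cat([torch.zeros(len(expert_states), dtype=torch.long),
                        torch.ones(len(novice_states), dtype=torch.long)])
    loss = F.cross_entropy(D(states), labels)  # D returns 2-class logits
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```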
Suppose further there exists a set of traces p =\n{(51,..-, Sn) }%9 which were generated under a policy 7 acting optimally under some unknown\nreward R,,,.. In third-person imitation learning, one attempts to recover by proxy through p a policy\na9 = f(p) which acts optimally with respect to R,,.\nIn this section, we discuss a simple algorithm for third-person imitation learning. This algorithm\nis able to successfully discriminate between expert and novice policies, even when the policies are\nexecuted under different environments. Subsequently, this discrimination signal can be used to train\nexpert policies in new domains via RL by training the novice policy to fool the discriminator, thus\nforcing it to match the expert policy.\nTo handle the third-person setting, where expert and novice are in different environments, we con-\nsider that Dr works by first extracting features from o;, and then using these features to make a\nIn third-person learning, observations are more typically available rather than direct state access,\nso going forward we will work with observations 0; instead of states s; as representing the expert\ntraces. The top row of Figure|8]illustrates what these observations are like in our experiments.\nWe begin by recalling that in the algorithm proposed by (2016) the loss in Equation [2\n\nis utilized to train a discriminator Dp capable of distinguishing expert vs non-expert policies. Un-\nfortunately, (2) will likely fail in cases when the expert and non-expert act in different environments,\nsince Dp will quickly learn these differences and use them as a strong classification signal.\nclassification. Suppose then that we partition Dr into a feature extractor Dp and the actual clas-\nsifier which assigns probabilities to the outputs of Dr. Overloading notation, we will refer to the\nclassifier as Dp going forward. For example, in case of a deep neural net representation, D- would\ncorrespond to the earlier layers, and Dr to the later layers. The problem is then to ensure that Dr\ncontains no information regarding the rollout\u2019s domain label dg (i.e., expert vs. novice domain).\nThis can be realized as\nnaximin Lp = > CE(Dr(Dr(oi)), ce,)\n\ns.t. MI(Dp(0;);d;) = 0\nThe mutual information term can be instantiated by introducing another classifier Dp, which take:\nfeatures produced by Dp and outputs the probability that those features were produced by in the\nexpert vs. non-expert environment. (See{Bridle et al.|(1992);/Barber & Agakov|(2005);|Krause et al\n(2010);|Chen et al.|(2016) for further discussion on instantiating the information term by introducing\n\nanother classifier.) If o; = D(o0;), then the problem can be written as\nmax min max Lp +Lp= > CE(Dr(ai), ce;) + CE(Dp(0%), de,\nIn words, we wish to minimize class loss while maximizing domain confusion.\nOften, it can be difficult for even humans to judge a static image as expert vs. non-expert because it\ndoes not convey any information about the environmental change affected by the agent\u2019s actions. Fot\nexample, if a pointmass is attempting to move to a target location and starts far away from its goal\nstate, it can be difficult to judge if the policy itself is bad or the initialization was simply unlucky. In\nresponse to this difficulty, we give Dp access to not only the image at time t, but also at some future\ntime t + n. Define o, = Dp(o,) and o\u00a24n = Dp(or4n). 
The classifier then makes a prediction\nDror, Oren) = &.\nThis renders the following formulation:\nmaxminmaxlpy+ Lp = Ss CE(Dr(0i, Fitn), Ce;,) + CE(Dp (a), de; )\n\ntT Dr Dp\nNote we also want to optimize over Dr, the feature extractor, but it feeds both into Dr and into Dp\nwhich are competing (hidden under @), which we will address now.\nTo deal with the competition over Dr, we introduce a function G that acts as the identity when\nmoving forward through a directed acyclic graph and flips the sign when backpropagating through\n\nthe graph. This technique has enjoyed recent success in computer vision. See, for example,\n& Lempitsky||2014). With this trick, the problem reduces to its final form\nIn Equation we flip the gradient\u2019s sign during backpropagation of D with respect to the domain\nclassification loss. This corresponds to stochastic gradient ascent away from features that are useful\nfor domain classification, thus ensuring that D produces domain agnostic features. Equation|5]can\nbe solved efficiently with stochastic gradient descent. Here is a hyperparameter that determines\nthe trade-off made between the objectives that are competing over Dp.\nTo ensure sufficient signal for discrimination between expert and non-expert, we collect third-persot\ndemonstrations in the expert domain from both an expert and from a non-expert.\nOur complete formulation is graphically summarized in Figure|2]\nWhere MI is mutual information and hence we have abused notation by using Dr, Dp, and dy to\nmean the classifier, feature extractor, and the domain label respectively as well as distributions over\nthese objects.\nmax, min, Lat Lp =} 1 CE(Da(oi, itn), \u00a2t,) + \\CE(Do(G(ai), de)\nFigure 2: Architecture diagram for third-person imitation learning. Images at time t and t + 4 are\nsent through a feature extractor to obtain F'(o,) and F'(o,44). Subsequently, these feature vectors\nare reused in two places. First, they are concatenated and used to predict whether the samples are\ndrawn from expert or non-expert trajectories. Second, F'(0;) is utilized to predict a domain label\n(expert vs. novice domain). During backpropogation, the sign on the domain loss Lp is flipped\nto destroy information that was useful for distinguishing the two domains. This ensures that the\nfeature extractor F' is domain agnostic. Finally, the classes probabilities that were computed using\nthis domain-agnostic feature vector are utilized as a cost signal in TRPO; which is subsequently\nutilized to train the novice policy to take expert-like actions and collect further rollouts."}, {"section_index": "4", "section_name": "5.2 ALGORITHM", "section_text": "To solve the game formulation in Equation 6), we perform alternating (partial) optimization over\nthe policy 7 and the reward function and domain confusion encoded through Dr, Dp, Dr.\nThe optimization over Dr,Dp,Dr is done through stochastic gradient descent with\n\nADAM|Kingma & Ba|(2014).\nOur generator (79) step is similar to the generator step in the algorithm by (2016). We\nsimply use \u2014 log Dp as the reward. Using policy gradient methods (TRPO), we train the generator\nto minimize this cost and thus push the policy further towards replicating expert behavior. Once the\ngenerator step is done, we start again with the discriminator step. The entire process is summarized\nin algorithm 1.\nWe seek to answer the following questions through experiment:\n. Is it possible to solve the third-person imitation learning problem in simple settings? 
Le..,\ngiven a collection of expert image-based rollouts in one domain, is it possible to train a\npolicy in a different domain that replicates the essence of the original behavior?\n\n. Does the algorithm we propose benefit from both domain confusion and velocity?\n\n. How sensitive is our proposed algorithm to the selection of hyper-parameters used in de-\nployment?\n\n. How sensitive is our proposed algorithm to changes in camera angle?\n\n. How does our method compare against some reasonable baselines?\nAlgorithm 1 A third-person imitation learning algorithm.\nPoint: A pointmass attempts to reach a point ina plane. The color of the target and the camera angl\nchange between domains.\nReacher: A two DOF arm attempts to reach a designated point in the plane. The camera angle\nthe length of the arms, and the color of the target point are changed between domains. Note tha\nchanging the camera angle significantly alters the image background color from largely gray t\nroughly 30 percent black. This presents a significant challenge for our method.\nInverted Pendulum: A classic RL task wherein a pendulum must be made to balance via control\nFor this domain, We only change the color of the pendulum and not the camera angle. Since ther\nis no target point, we found that changing the camera angle left the domain invariant representation:\nwith too little information and resulted in a failure case. In contrast to some traditional rendering:\nTo evaluate our algorithm, we consider three environments in the MuJoCo physics simulator. There\nare two different versions of each environment, an expert variant and a novice variant. Our goal\nis to train a cost function that is domain agnostic, and hence can be trained with images on the\nexpert domain but nevertheless produce a reasonable cost on the novice domain. See Figure 1 for a\nvisualization of the differences between expert and novice environments for the three tasks.\nof this problem, we do not terminate an episode when the agent falls but rather allow data collectior\nto continue for a fixed horizon.\nmean reward\n\nReacher Reward vs Iteration\n\nInverted Pendulum Reward vs Iteration\n\nPoint Reward vs Iteration\n\nIteration\u201d\nmean reward\n\nReacher Reward vs Iteration\n\nInverted Pendulum Reward vs Iteration\n\ned\n\nPoint Reward vs Iteration\n\neration cE 2\nFigure 3: Reward vs training iteration for reacher, inverted pendulum, and point environments. The\nlearning curves are averaged over 5 trials with error bars represent one standard deviation in the\nreward distribution at the given point.\ndomain classifcaiton accuracy\n\nReacher domain class acc vs iteration\n\niteration\n\ndomain classifcaiton accuracy\n\nPendulum domain class acc vs iteration\n\niteration\n\ndomain classifcaiton accuracy\n\nPoint domain class acc vs iteration\n\niteration\ndomain classifcaiton accuracy\n\ndomain classifcaiton accuracy\ndomain classifcaiton accuracy\n\nReacher domain class acc vs iteration Pendulum domain class acc vs iteration Point domain class acc vs iteration.\n\niteration iteration iteration\nFigure 4: Domain accuracy vs. 
training iteration for reacher, inverted pendulum, and point environ-\nments.\nta, = Datta i ieee a aaa aaa eras a iia\nWe answer this question with the experiments summarized in Figure|5| This experiment compare:\nour approach with: (i) our approach without the domain confusion loss; (ii) our approach without th\nmulti-time step input; (iii) our approach without the domain confusion loss and without the multi\ntime step input (which is very similar to the approach in{Ho & Ermon|( . We see that addin;\ndomain confusion is essential for getting strong performance in all three experiments. Meanwhile\nadding multi-time step input marginally improves the results. See also Figure [7] for an analysis o\nthe effects of multi-time step input on the final results.\nReward\n\nvelo and domain confusion reacher\n\nHteration\n\nis\n\noie\nSom_pks_velo\n\nvelo and domain confusion inverted pendulum,\n\neration\n\nvelo and domain confusion point\n\nNefation\n\nis\nReward\n\nvelo and domain confusion reacher\n\neration\n\n=dom_pl_ velo\n\nvelo and domain confusion inverted pendulum\n\nReward\n\nReward\n\nvelo and domain confusion point\n\n=dom_pl_ velo\n\nEan AS\n\neration\nFigure 5: Reward vs iteration for reacher, inverted pendulum, and point environments with no do.\nmain confusion and no velocity (red), domain confusion (orange), velocity (brown), and both do.\nmain confusion and velocity (blue).\nIs it possible to solve the third-person imitation learning problem in simple settings? In Figure|3}\nwe see that our proposed algorithm is indeed able to recover reasonable policies for all three tasks we\nexamined. Initially, the training is quite unstable due to the domain confusion wreaking havoc on the\nlearned cost. However, after several iterations the policies eventually head towards reasonable local\nminima and the standard deviation over the reward distribution shrinks substantially. Finally, we\nnote that the extracted feature representations used to complete this task are in fact domain-agnostic,\nas seen in Figure|9] Hence, the learning is properly taking place from a third-person perspective."}, {"section_index": "5", "section_name": "Does the algorithm we propose benefit from both domain confusion and the multi-time step input?", "section_text": "How sensitive is our proposed algorithm to the selection of hyper-parameters used in deployment?\nFigure [6] shows the effect of the domain confusion coefficient 4, which trades off how much we\nshould weight the domain confusion objective vs. the standard cost-recovery objective, on the final\nperformance of the algorithm. Setting \\ too low results in slower learning and features that are not\ndomain-invariant. Setting too high results in an objective that is too quick to destroy information,\nwhich makes it impossible to recover an accurate cost.\nFor multi-time step input, one must choose the number of look-ahead frames that are utilized. I\ntoo small a window is chosen, the agent\u2019s actions have not affected a large amount of change ir\nthe environment and it is difficult to discern any additional class signal over static images. If toc\nlarge a time-frame passes, causality becomes difficult to interpolate and the agent does worse thar\nsimply being trained on static frames. Figure|7]illustrates that no number of look-ahead frames is\nconsistently optimal across tasks. 
However, a value of 4 showed good performance over all tasks\nand so this value was utilized in all other experiments.\nReward\n\nReacher Reward vs dom confusion coefficient\n\nDomain Confusion Coefficient\n\nReward\n\nPendulum Reward vs dom confusion coefficient\n\n\u2018Domain Confusion Coefficient\n\nReward\n\nPoint Reward vs dom confusion coefficient\n\n\u2018Domain Confusion Coefficient | .\nFigure 6: Reward of final trained policy vs domain confusion weight for reacher, inverted pendu:\nlum, and point environments.\ncos Reacher Reward vs look-ahead frames Inverted Pendulum Reward vs look-ahead frames Point Reward vs look-ahead frames\n\nReward\n\nLook-ehead frames\n\nReward\n\nLook-ahead frames\n\nReward\n\nLook-ahead frames.\n<7 Reacher Reward vs look-ahead frames\n\nReward\n\nLook-ahead frames\n\nReward\n\nInverted Pendulum Reward vs look-ahead frames\n\nLook-ahead frames \u201c\n\nReward\n\nPoint Reward vs look-ahead frames\n\nLook-ahead frames\nFigure 7: Reward of final trained policy vs number of look-ahead frames for reacher, inverted pen\ndulum, and point environments.\nHow sensitive is our algorithm to changes in camera angle? We present graphs for the reacher\nand point experiments wherein we exam the final reward obtained by a policy trained with third-\nperson imitation learning vs the camera angle difference between the first-person and third-person\nperspective. We omit the inverted double pendulum experiment, as the color and not the camera\nangle changes in that setting and we found the case of slowly transitioning the color to be the\ndefinition of uninteresting science.\nReward\n\nReacher Reward vs dom confusion coefficient\n\nBomain Confusion Coefficient\n\nReward\n\nPendulum Reward vs dom confusion coefficient\n\n\u201c \u2018Domain Confusion Coefficient\u201d\n\nReward\n\nPoint Reward vs dom confusion coefficient\n\n\u2018Bomain Confusion Coefficient .\nReward\n\nReward\n\n0\n\n-3000 -1000\n\n-5000\n\n10\n\n12\n\nPoint Experiment Third-Person vs. Baselines\n\nO---0-\no--0\n\nwort O77 On OO\n\nwer O72 02 0-+ O--- 0+ o---0\n-0--~9--0--0- -0--\u00b0>~6 228\n\na __-o] \u2014 firston thir\n~~ _ \u00b0 \u00b0 -- first-person\no\u2014\u00b0~_,--0\u2014 0 \u2014 9 \\ aN Ze Pi\n\nul\n-=> third-persor\n\nIteration\n\nReacher Experiment Third-Person vs. Baselines\n\n\u2014\u2014 first on third\n- 7 first-person\ndl\n\nthird-person\n\nIteration\nReward\n-1000 0\n\n-3000\n\n-5000\n\n9218 O77 OF Ore OF\n0- -0--\u00b0> ~9- -0--0-\n\nO--- Oren gea2 O77 Or+= 777 OB Qe -\nLB oo -o--9--0--\u00b0 0~-~9--0-\n\nO~Wo-\n\no..\ntoe\n\n\u2014 first on thirc\nom \u2014 = first-person\n\noot ONY seo\n\nthird-persor\nFigure 9: Learning curves for third-person imitation vs. three baselines: 1)RL with true reward, 2)\nfirst-person imitation, 3) attempting to use first-person features on the third-person agent.\nHow does our method compare against reasonable baselines? We consider the following base-\nlines for comparisons against third-person imitation learning. 1) Standard reinforcement learning\nwith using full state information and the true reward signal. This agent is trained via TRPO. 2)\nPoint Camera Angle vs Reward\n\n10 20 30\nDifference in Camera Anoale (decrees)\n\nReward\n\nReacher Camera Angle vs Reward\n\n5 10 15\nDifference in Camera Angle (deqarees)\nFigure 8: Point and reacher final reward after 20 epochs of third-person imitation learning vs the\ncamera angle difference between the first and third-person perspective. 
We see that the point follows\na fairly linear slope in regards to camera angle differences, whereas the reacher environment is more\nstochastic against these changes.\nReward\n\n10\n\n-12\n\n\u2014\u2014 first on third\n\u2014 first-person\ndl\n\n--- third-person\nWe compare all three of these baselines to third-person imitation learning. As we see in figure\n9: 1) Standard RL, which (unlike the imitation learning approaches) has access to full state anc\ntrue reward, helps calibrate performance of the other approaches. 2) First-person imitation learning\nis faced with a simpler imitation problem and accordingly outperforms third-person imitation, ye\nthird-person imitation learning is nevertheless competitive. 3) Applying the first-person policy tc\nthe third-person agent fails miserably, illustrating that explicitly considering third-person imitatior\nis important in these settings."}, {"section_index": "6", "section_name": "] DISCUSSION AND FUTURE WORK", "section_text": "In this paper, we presented the problem of third-person imitation learning. We argue that this prot\nlem will be important going forward, as techniques in reinforcement learning and generative adver\nsarial learning improve and the cost of collecting first-person samples remains high. We presente\nan algorithm which builds on Generative Adversarial Imitation Learning and is capable of solvin\nsimple third-person imitation tasks.\nOne promising direction of future work in this area is to jointly train policy features and cost feature\nat the pixel level, allowing the reuse of image features. Code to train a third person imitation learnin;\n\nagent on the domains from this paper is presented here: https://github.com/bstadie,"}, {"section_index": "7", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work was done partially at OpenAI and partially at Berkeley. Work done at Berkeley was\nsupported in part by Darpa under the Simplex program and the FunLoL program.\nD. Barber and F. V. Agakov. Kernelized infomax clustering. NJPS, 2005.\nStandard GAIL (first-person imitation learning). Here, the agent receives first-person demonstration\nand attempts to imitate the correct behavior. This is an upper bound on how well we can expect to\ndo, since we have the correct perspective. 3) Training a policy using first-person data and applying\nit to the third-person environment.\nSomewhat unfortunately, the different reward function scales make it difficult to capture information\non the variance of each learning curve. Consequently, in Appendix A we have included the full\nlearning curves for these experiments with variance bars, each plotted with an appropriate scale to\nexamine the variance of the individual curves.\nBrenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning\nfrom demonstration. Robotics and autonomous systems, 57(5):469-483, 2009.\nYusuf Aytar and Andrew Zisserman. Tabula rasa: Model transfer for object category detection. In\n2011 International Conference on Computer Vision. pp. 2252\u20142259. IEEE. 2011.\nMalinda Carpenter, Josep Call, and Michael Tomasello. Understanding prior intentions enables\ntwo-year-olds to imitatively learn a complex task. Child development. 73(5):1431\u20141441. 2002.\n[an Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair\nAaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor.\nmation Processing Systems, pp. 
2672-2680, 2014.\nJudy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of\ndomain-invariant image representations. arXiv preprint arXiv: 1301.3224, 2013.\nC. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy\noptimization. ICML, 2016.\n\nY. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. Arxiv preprint\n1409.7495, 2014.\n\nG Gioioso, G Salvietti, M Malvezzi, and D Prattichizzo. An object-based approach to map human\nhand synergies onto robotic hands with dissimilar kinematics. Robotics: Science and Systems\nVIII, pp. 97, 2013.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings\not the 3rd International Conference on T.earnine Renresentations (ICTR). 9014.\nYishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bound:\nand algorithms. arXiv preprint arXiv:0902.3430, 2009.\nDean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in\nNeural Information Processing Systems, pp. 305\u2014313. 1989.\nSt\u00e9phane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and struc\ntured prediction to no-regret online learning. In AJSTATS, volume 1, pp. 6, 2011.\nJohn Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust regio:\npolicy optimization. Arxiv preprint 1502.05477, 2015a.\nEric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Pieter Abbeel, Sergey\nLevine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations\nfrom simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle-\nmare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level\ncontrol through deep reinforcement learning. Nature, 518(7540):529-533, 2015.\nEric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion:\nMaximizing for domain invariance. arXiv preprint arXiv: 1412.3474, 2014.\nJun Yang, Rong Yan, and Alexander G Hauptmann. Cross-domain video concept detection usin\nadaptive svms. In Proceedings of the 15th ACM international conference on Multimedia, pj\n188-197. ACM, 2007.\nFigure 10: Inverted Pendulum performance under a policy trained on RL, first-person imitatiot\nlearning, third-person imitation, and a first-person policy applied to a third-person agent.\nHere, we plot the learning curves for each of the baselines mentioned in the experiments section as\na standalone plot. 
[Figure: learning curves with variance bars for the Inverted Double Pendulum environment. Panels: RL reward vs. iteration; first-person imitation reward vs. iteration; third-person imitation reward vs. iteration; first-person policy applied to the third-person agent. Each panel plots mean reward against iteration.]

Figure 11: Reacher performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

Figure 12: Point performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

[Figure: the corresponding learning-curve panels for the Reacher and Point environments: RL reward, first-person imitation, third-person imitation, and the first-person policy applied to the third-person agent, each plotted as mean reward against iteration.]

Joint Feature Extractor: Input images are of size 50 x 50 with 3 channels (RGB). The extractor consists of two convolutional layers, each followed by a max-pooling layer of size 2. Each convolutional layer uses 5 filters of size 3.

Domain Discriminator and Class Discriminator: Input is the domain-agnostic output of the convolutional layers. Each discriminator consists of two feed-forward layers of size 128, followed by a final feed-forward layer of size 2 and a soft-max layer to obtain the log probabilities.

ADAM is used for discriminator training with a learning rate of 0.001. The RL generator uses the off-the-shelf TRPO implementation available in RLLab."}]
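To make the hyperparameters above concrete, the following is a minimal sketch of the described architecture, assuming PyTorch; the paper itself trains the generator with RLLab's TRPO, and all class and helper names here are ours, not the authors'.

```python
# Minimal sketch (not the authors' code) of the joint feature extractor and
# the two discriminator heads described above, assuming PyTorch. Filter and
# layer sizes follow the listed hyperparameters.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 5, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(5, 5, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )

    def forward(self, x):  # x: (batch, 3, 50, 50) RGB images
        return self.net(x)

def make_head(in_dim):
    # Two feed-forward layers of 128 units, a 2-way output, and log-softmax.
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 2), nn.LogSoftmax(dim=-1),
    )

features = FeatureExtractor()
with torch.no_grad():
    feat_dim = features(torch.zeros(1, 3, 50, 50)).shape[-1]
domain_discriminator = make_head(feat_dim)
class_discriminator = make_head(feat_dim)
optimizer = torch.optim.Adam(
    list(features.parameters()) + list(domain_discriminator.parameters())
    + list(class_discriminator.parameters()), lr=1e-3)  # ADAM, lr = 0.001
```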
HysBZSqlx
[{"section_index": "0", "section_name": "PLAYING SNES IN THE RETRO LEARNING ENVIRONMENT", "section_text": "Nadav Bhonker*, Shai Rozenberg* and Itay Hubara
{nadavbh, shairoz}@tx.technion.ac.il
itayhubara@gmail.com

Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Controlling artificial agents using only raw high-dimensional input data such as image or sound is a difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs in the field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz et al., 2016), navigation and more. Agent interaction with the real world is usually either expensive or not feasible, as the real world is far too complex for the agent to perceive. Therefore in practice the interaction is simulated by a virtual environment which receives feedback on a decision made by the algorithm. Traditionally, games were used as a RL environment, dating back to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro, 1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and tasks which are highly correlated with real-world problems. For example, an agent that masters a racing game, by observing a simulated driver's view screen as input, may be useful for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment (ALE) (Bellemare et al., 2013), which provides a common interface to dozens of Atari 2600 games, each presenting a different challenge. ALE provides an extensive benchmarking platform, allowing a controlled experiment setup for algorithm evaluation and comparison. The main challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achieving a score higher than an expert human player) without providing the algorithm any game-specific information (i.e., using the same input available to a human: the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made a breakthrough in the field of Deep Reinforcement Learning by achieving human-level performance on 29 out of 49 games. In this work we present a new environment — the Retro Learning Environment (RLE).
RLE sets new challenges by providing a unified interface for Atari 2600 games as well as more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only one was able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Busoniu et al., 2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-configured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases. The main contributions of the paper are as follows:

• Introducing a novel RL environment with significant challenges and an easy agent evaluation technique (enabling agents to compete against each other), which could lead to new and more advanced RL algorithms.
• A new method to train an agent by enabling it to train against several opponents, making the final policy more robust.
• Encapsulating several different challenges in a single RL environment.

The Arcade Learning Environment is a software framework designed for the development of RL algorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward in every step. The action is the equivalent of a human's joystick button combination and the reward is the difference between the scores at time stamps t and t − 1. The diversity of games for Atari provides a solid benchmark since different games have significantly different goals. Atari 2600 has over 500 games; currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison."}, {"section_index": "2", "section_name": "2.2 INFINITE MARIO", "section_text": "Infinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels are randomly generated. On these levels the Mario AI Competition was held. During the competition several algorithms were trained on Infinite Mario and their performances were measured in terms of the number of stages completed. As opposed to ALE, training is not based on the raw screen data but rather on an indication of Mario's (the player's) location and the objects in its surroundings. This environment no longer poses a challenge for state-of-the-art algorithms. Its main shortcoming lies in the fact that it provides only a single game to be learnt. Additionally, the environment provides hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use of planning algorithms that highly outperform any learning-based algorithm."}, {"section_index": "3", "section_name": "2.4 OPENAI UNIVERSE", "section_text": "Universe (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V and Portal, as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn't run the games locally and requires a VNC interface to a server that runs the games. This leads to a lower frame rate and thus longer training times.
The OpenAI Gym (Brockman et al., 2016) is an open-source platform with the purpose of creating an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments supported by it, for example ALE, Go, MountainCar and VizDoom (2016), an environment for learning the 3D first-person-shooter game "Doom". OpenAI Gym's recent appearance and wide usage indicates the growing interest and research done in the field of RL.

Malmo (Johnson et al., 2016) is an artificial intelligence experimentation platform for the famous game "Minecraft". Although Malmo consists of only a single game, it presents numerous challenges since the "Minecraft" game can be configured differently each time. The input to the RL algorithms includes specific features indicating the "state" of the game and the current reward."}, {"section_index": "4", "section_name": "2.6 DEEPMIND LAB", "section_text": "DeepMind Lab is a first-person 3D platform environment which allows training RL algorithms on several different challenges: static/random map navigation, collecting fruit (a form of reward) and a laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. In Lab the agent observations are the game screen (with an additional depth channel) and the velocity of the character. Lab supports four games (one game, four different modes)."}, {"section_index": "5", "section_name": "2.7 DEEP Q-LEARNING", "section_text": "In our work, we used several variants of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015), an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose the action that maximizes the final score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving, jumping). DQN learns through trial and error while trying to estimate the "Q-function", which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy π. The Q-function is represented using a convolutional neural network that receives the screen as input and predicts the best possible action at its output. The Q-function weights θ are updated according to:

θ_{t+1}(s_t, a_t) = θ_t + α (R_{t+1} + γ max_a Q_t(s_{t+1}, a; θ′) − Q_t(s_t, a_t; θ_t)) ∇_θ Q_t(s_t, a_t; θ_t),    (1)

where s_t and s_{t+1} are the current and next states, a_t is the action chosen, α is the step size, γ is the discounting factor, and R_{t+1} is the reward received by applying a_t at s_t. θ′ represents the previous weights of the network that are updated periodically.
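Concretely, update (1) corresponds to a standard temporal-difference step; the following is a minimal sketch of one such update, assuming PyTorch (the replay buffer, network definitions and all names here are ours, not part of DQN's or RLE's code):

```python
# Sketch of one DQN update (Eq. 1). q_net maps screens to per-action Q-values;
# target_net holds the periodically copied weights theta'.
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    s, a, r, s_next, done = batch          # tensors sampled from a replay buffer
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s_t, a_t; theta)
    with torch.no_grad():
        # R_{t+1} + gamma * max_a Q(s_{t+1}, a; theta'); zero beyond terminals
        target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
    loss = F.smooth_l1_loss(q_sa, target)  # descend toward the TD target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```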
Other than DQN, we examined two leading algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQN-based algorithm with a modified network update rule, and Dueling Double DQN (Wang et al., 2015), a modification of D-DQN's architecture in which the Q-function is modeled using a state- (screen-) dependent estimator and an action-dependent estimator.

The Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990. A total of 783 games were released, among them the iconic Super Mario World, Donkey Kong Country and The Legend of Zelda. Table 1 presents a comparison between the Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex."}, {"section_index": "6", "section_name": "3.2 IMPLEMENTATION", "section_text": "To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella, RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (libRetro site), which allows communication between front-end programs and game-console emulators. Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an estimated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include the Nintendo Entertainment System, Game Boy, N64, Sega Genesis, Saturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console, implemented using the snes9x emulator, as its games present interesting, yet plausible to overcome, challenges. Additionally, we utilized the Genesis-Plus-Gx emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000."}, {"section_index": "7", "section_name": "3.3 SOURCE CODE", "section_text": "RLE is fully available as open source software for use under GNU's General Public License. The environment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Adding a new game to the environment is a relatively simple process."}, {"section_index": "8", "section_name": "3.4 RLE INTERFACE", "section_text": "RLE provides a unified interface to all games in its supported consoles, acting as an RL wrapper to the LibRetro interface. Initialization of the environment is done by providing a game (ROM file) and a gaming console (denoted by 'core'). Upon initialization, the first state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment. Actions have a bit-wise representation where each controller button is represented by a one-hot vector; therefore a combination of several buttons is possible using the bit-wise OR operator (see the sketch below). The number of valid button combinations is larger than 700, therefore only the meaningful combinations are provided. The environment's observation is the game screen, provided as a 3D array of 32 bits per pixel with dimensions which vary depending on the game. The reward can be defined differently per game; usually we set it to be the score difference between two consecutive frames. By setting different configurations to the environment, it is possible to alter in-game properties such as difficulty (i.e. easy, medium, hard), its characters, levels, etc.
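The bit-wise action encoding mentioned above is easy to illustrate in isolation; the runnable sketch below uses placeholder bit positions of our own, not RLE's actual button layout:

```python
# Bit-wise action encoding: each controller button is one bit, and button
# combinations are formed with the bit-wise OR operator. Bit positions here
# are placeholders for illustration only.
BUTTONS = {name: 1 << i for i, name in enumerate(
    ['up', 'down', 'left', 'right', 'A', 'B', 'X', 'Y', 'L', 'R', 'start', 'select'])}

def combine(*names):
    action = 0
    for name in names:
        action |= BUTTONS[name]
    return action

jump_right = combine('right', 'B')   # e.g. move right while pressing B
print(bin(jump_right))               # 0b101000 with these placeholder bits
```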
Table 1: Atari 2600, SNES and Genesis comparison

                               Atari 2600   SNES                 Genesis
Number of games                565          783                  928
CPU speed                      1.19 MHz     3.58 MHz             7.6 MHz
ROM size                       2-4 KB       0.5-6 MB             16 MB
RAM size                       128 bytes    128 KB               72 KB
Color depth                    8 bit        16 bit               16 bit
Screen size                    160x210      256x224 or 512x448   320x224
Number of controller buttons   5            12                   11
Possible button combinations   18           over 720             over 100

"}, {"section_index": "9", "section_name": "3.5 ENVIRONMENT CHALLENGES", "section_text": "Integrating SNES and Genesis with RLE presents new challenges to the field of RL, where visual information in the form of an image is the only state available to the agent. Obviously, SNES games are significantly more complex and unpredictable than Atari games. For example, in sports games such as NBA, while the player (agent) controls a single player, all the other nine players' behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., the reward for an action is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games, such as Super Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level, which requires the player to keep moving to the right. Moreover, upon completing a level, a score bonus is given according to the time required for its completion. Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in Section 4.2. Moreover, unlike Atari, which consists of an eight-direction joystick and one action button, SNES has an eight-direction pad and six action buttons. Since combinations of buttons are allowed, and required at times, the actual action space may be larger than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES games is very rich, filled with details which may move locally or across the screen, effectively acting as non-stationary noise, since it provides little to no information regarding the state itself. Finally, we note that SNES featured the first 3D games. In the game Wolfenstein, the player must navigate a maze from a first-person perspective, while dodging and attacking enemies. The SNES offers plenty of other 3D games such as flight and racing games which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to "real world" tasks, as in the case of self-driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, is presented in Figure 1.

Figure 1: Atari 2600 and SNES game screen comparison. Left: "Boxing", an Atari 2600 fighting game. Right: "Mortal Kombat", a SNES fighting game. Note the exceptional difference in the amount of detail between the two games. Therefore, distinguishing a relevant signal from noise is much more difficult.
Table 2: Comparison between RLE and the latest RL environments

Characteristics       RLE              OpenAI Universe   Infinite Mario   ALE           Project Malmo   DeepMind Lab
Number of games       8 out of 7000+   1000+             1                74            1               4
In-game adjustments   Yes              No                No               No            Yes             Yes
Frame rate            530 fps (SNES)   60 fps            5675 fps         120 fps       <7000 fps       <1000 fps
Observation (input)   screen, RAM      screen            hand-crafted     screen, RAM   hand-crafted    screen + depth
                                                         features                       features        and velocity

"}, {"section_index": "10", "section_name": "4.1 EVALUATION METHODOLOGY", "section_text": "The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by Mnih et al. (2015). Each examined algorithm is trained until either it reaches convergence or 100 epochs (each epoch corresponds to 50,000 actions), and thereafter it is evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player. For each game the human player was given two hours of training, and his performance was evaluated over 20 episodes. As the various algorithms don't use the game audio in the learning process, the audio was muted for both the agent and the human. From both the human's and the agents' scores, a random agent's score (an agent performing actions randomly) was subtracted to assure that learning indeed occurred. It is important to note that DQN's ε-greedy approach (select a random action with a small probability ε) is present during testing, thus assuring that the same sequence of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, in our experiments we maintained the same pre-processing as DQN (i.e., downscaling the image to 84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn't affect a human's ability to play the game, and is therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations which provide unique behavior. For example, in many games the R and L action buttons don't have any use, therefore their use and combinations were omitted."}, {"section_index": "11", "section_name": "4.1.1 RESULTS", "section_text": "A thorough comparison of the four different agents' performances on SNES games can be seen in Figure 2. The full results can be found in Table 3. Only in the game Mortal Kombat was a trained agent able to surpass an expert human player's performance, as opposed to Atari games, where the same algorithms have surpassed a human player on the vast majority of the games.

Figure 2: DQN, D-DQN and Dueling D-DQN performance. Results were normalized by subtracting the random agent's score and dividing by the human player score. Thus 100 represents a human player and zero a random agent.
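The normalization used in Figure 2 can be written out explicitly; a small sketch follows (our reading of the caption, in which the human score is baseline-corrected as well, so that 0 maps to a random agent and 100 to a human):

```python
# Normalized score: 0 = random agent, 100 = expert human player.
def normalized_score(agent, human, random_agent):
    return 100.0 * (agent - random_agent) / (human - random_agent)

# e.g. Mortal Kombat, Dueling D-DQN vs. human (Table 3), with a hypothetical
# random-agent score of 0: 100 * 169300 / 132441 ~= 127.8, i.e. above human.
```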
One example is the game Wolfenstein, a 3D first-person shooter which requires solving 3D vision tasks, navigating in a maze and detecting objects. As evident from Figure 2, all agents produce poor results, indicating a lack of the required properties. With the ε-greedy approach, the agents weren't able to explore enough states (or even other rooms, in our case). The algorithm's final policy appeared as a random walk in 3D space. Exploration based on visited states, such as presented in Bellemare et al. (2016), might help address this issue.

An interesting case is Gradius III, a side-scrolling flight-shooter game. While the trained agent was able to master the technical aspects of the game, which include shooting incoming enemies and dodging their projectiles, its final score is still far from a human's. This is due to a hidden game mechanism in the form of "power-ups", which can be accumulated and significantly increase the player's abilities. The more power-ups collected without being used, the larger their final impact will be. While this game mechanism is evident to a human, the agent acts myopically and uses the power-ups straight away.²

²A video demonstration can be found at https://youtu.be/nUI9XLMveEU

"}, {"section_index": "12", "section_name": "4.2 REWARD SHAPING", "section_text": "As part of the environment and algorithm evaluation process, we investigated two case studies. The first is a game on which DQN had failed to achieve a better-than-random score, and the second is a game on which the training duration was significantly longer than that of other games.

In the first case study, we used a 2D back-view racing game, "F-Zero". In this game, one is required to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is only received upon completing a lap. This is an extreme case of reward delay. A lap may last as long as 30 seconds, which spans over 450 states (actions) before a reward is received. Since DQN's exploration is a simple ε-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a function of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeed, when the reward was defined as such, the agents learned to finish the race in first place within a short training period.

The second case study is the famous game of Super Mario. In this game the agent, Mario, is required to reach the right-hand side of the screen, while avoiding enemies and collecting coins. We found this case interesting as it involves several challenges at once: a dynamic background that can change drastically within a level, sparse and delayed rewards, and multiple tasks (such as avoiding enemies and pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach the end of the level without any reward shaping; this was possible since the agent receives rewards for events (collecting coins, stomping on enemies, etc.) that tend to appear to the right of the player, causing the agent to prefer moving right. However, the training time required for convergence was significantly longer than for other games. We defined the reward as the sum of the in-game reward and a bonus granted according to the player's position, making moving right preferable. This reward proved useful, as the training time required for convergence decreased significantly. The two games above can be seen in Figure 3.
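Both case studies fit a single wrapper pattern; a minimal sketch, where `bonus_fn` reads the shaping signal (speed gauge or horizontal position) and the `act` method mirrors an ALE-style environment, both assumptions of ours:

```python
# Reward shaping: shaped reward = in-game score difference + weighted bonus.
class ShapedRewardEnv:
    def __init__(self, env, bonus_fn, weight=1.0):
        self.env, self.bonus_fn, self.weight = env, bonus_fn, weight

    def act(self, action):
        reward = self.env.act(action)      # score difference between frames
        bonus = self.bonus_fn(self.env)    # e.g. speed, or rightward progress
        return reward + self.weight * bonus

# F-Zero:      ShapedRewardEnv(env, bonus_fn=read_speed_gauge)
# Super Mario: ShapedRewardEnv(env, bonus_fn=rightward_progress)
```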
Figure 4 illustrates the agent's average value function. Though both agents were able to complete the stage trained upon, the convergence rate with reward shaping is significantly quicker due to the agent's immediate realization that it should move rightwards.

Figure 3: Left: the game Super Mario with an added bonus for moving right, enabling the agent to master the game after less training time. Right: the game F-Zero. By granting a reward for speed, the agent was able to master this game, as opposed to using solely the in-game reward.

Figure 4: Averaged action-value (Q) for Super Mario trained with a reward bonus for moving right (blue) and without (red)."}, {"section_index": "13", "section_name": "4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS", "section_text": "In this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents n = 2 and the goals of the agents are opposite, as in r₁ = −r₂. This scheme is known as fully competitive (Busoniu et al., 2010). We used the simple single-agent RL approach (as described by Busoniu et al. (2010), Section 5.4.1), which is to apply the single-agent approach to the multi-agent case. This approach was proved useful in Crites and Barto (1996) and Matarić (1997). More elaborate schemes are possible, such as the minimax-Q algorithm (Littman, 1994; Littman, 2001); these may be explored in future works. We conducted three experiments on this setup. The first was to train two different agents against the in-game AI, as done in previous sections, and evaluate their performance by letting them compete against each other; here, rather than achieving the highest score, the goal was to win a tournament consisting of 50 rounds, as is common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other; in this case, we evaluated the agents by playing again against the in-game AI, separately. Finally, in our last experiment we tried to boost the agent's capabilities by alternating its opponents, switching between the in-game AI and other trained agents (sketched below).
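The alternating scheme amounts to cycling the rival between episodes; a sketch, where the opponent handles, `set_opponent` and `train_episode` are placeholders for the DQN machinery described earlier, not RLE's API:

```python
# Alternate the opponent every episode so the learned policy does not
# overfit a single adversary.
import itertools

def train_alternating(agent, env, opponents, num_episodes):
    for _, opponent in zip(range(num_episodes), itertools.cycle(opponents)):
        env.set_opponent(opponent)   # assumed multi-agent hook
        train_episode(agent, env)    # one standard single-agent DQN episode

# opponents = [in_game_ai, trained_dqn, trained_dueling_dqn]
```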
We chose the game Mortal Kombat, a two-character side-viewed fighting game (a screenshot of the game can be seen in Figure 1), as a testbed for the above, as it exhibits favorable properties: both players share the same screen, and the agent's optimal policy is heavily dependent on the rival's behavior, unlike racing games, for example. In order to evaluate two agents fairly, both were trained using the same characters, maintaining the identity of rival and agent. Furthermore, to remove the impact of the starting positions of both agents on their performances, the starting positions were initialized randomly.

In the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN. Each agent was trained against the in-game AI until convergence. Then 50 matches were performed between the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN. D-DQN lost 26 times to Dueling D-DQN. This win balance isn't far from the random case, since the algorithms converged to a policy in which movement towards the opponent is not required in order to win, rather than a policy that generalizes the game. Therefore, in many episodes little interaction between the two agents occurs, leading to a semi-random outcome.

In our second experiment, we continued the training process of the D-DQN network by letting it compete against the Dueling D-DQN network. After training, D-DQN was able to win 28 out of 30 games against the Dueling D-DQN, yet when we evaluated the re-trained network by playing 30 episodes against the in-game AI, its performance deteriorated drastically (from an average of 17,000 to an average of −22,000). This demonstrated a form of catastrophic forgetting (Goodfellow et al., 2013).

In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent and a trained Dueling D-DQN agent, in an alternating manner, such that in each episode a different rival played as the opponent, with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the "normal" Dueling D-DQN, which achieved 169,633). As a new and objective measure of generalization, we configured the in-game AI difficulty to be "very hard" (as opposed to the default "medium" difficulty). On this metric the alternating version achieved 83,400, compared to −33,266 for the Dueling D-DQN trained in the default setting, thus proving that the agent learned to generalize to other policies which weren't observed while training."}, {"section_index": "14", "section_name": "4.4 FUTURE CHALLENGES", "section_text": "As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to being able to learn all available games, the task of learning games in which the reward delay is extreme, such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games, such as Super Mario, feature several stages that differ in background and level structure. The task of generalizing platform games, as in learning on one stage and being tested on another, is another unexplored challenge. Likewise, surpassing human performance remains a challenge, since current state-of-the-art algorithms still struggle with many SNES games."}, {"section_index": "15", "section_name": "5 CONCLUSION", "section_text": "We introduced a rich environment for evaluating and developing reinforcement learning algorithms which presents significant challenges to current state-of-the-art algorithms. In comparison to other environments, RLE provides a large number of games with access to both the screen and the in-game state. The modular implementation we chose allows extensions of the environment with new consoles and games, thus ensuring the relevance of the environment to RL algorithms for years to come (see Table 2). We encountered several games in which the learning process is highly dependent on the reward definition. This issue can be addressed and explored in RLE, as reward definition can be done easily. The challenges presented in the RLE consist of: 3D interpretation, delayed reward, noisy background, stochastic AI behavior and more. Although some algorithms were able to play successfully on part of the games, to fully overcome these challenges an agent must incorporate both technique and strategy. Therefore, we believe that the RLE is a great platform for future RL research.
The authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, Alfred Agrell and the LibRetro community for their support, and Marc G. Bellemare for his valuable inputs.

M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016.

B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcement learning for robot navigation. In ESANN, 2013.

G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

L. Busoniu, R. Babuška, and B. De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183-221. Springer, 2010.

M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1):57-83, 2002.

R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. Citeseer, 1996.

I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.

M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016.

libRetro site. LibRetro. www.libretro.com. Accessed: 2016-11-03.

M. J. Matarić. Reinforcement learning in the multi-robot domain. In Robot Colonies, pages 73-82. Springer, 1997.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273-289, 1992.

S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.

J. Togelius, S. Karakovskiy, J. Koutnik, and J. Schmidhuber. Super Mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156-161. IEEE, 2009.

Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.

H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. CoRR, abs/1509.06461, 2015.

Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016."}, {"section_index": "16", "section_name": "Appendices", "section_text": "Experimental Results

Table 3: Average results of DQN, D-DQN, Dueling D-DQN and a human player

                DQN     D-DQN   Dueling D-DQN   Human
F-Zero          3116    3636    5161            6298
Gradius III     7583    12343   16929           24440
Mortal Kombat   83733   56200   169300          132441
Super Mario     11765   16946   20030           36386
Wolfenstein     100     83      40              2952
"}]
rkE3y85ee
[{"section_index": "0", "section_name": "CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX", "section_text": "Eric Jang
Google Brain
ejang@google.com
Shixiang Gu*
University of Cambridge
MPI Tübingen
Ben Poole†
Stanford University
poole@cs.stanford.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, and reinforcement learning domains. For example, discrete variables have been used to learn probabilistic latent representations that correspond to distinct semantic classes (Kingma et al., 2014), image regions (Xu et al., 2015), and memory locations (Graves et al., 2014; Graves et al., 2016). Discrete representations are often more interpretable (Chen et al., 2016) and more computationally efficient (Rae et al., 2016) than their continuous analogues.

However, stochastic networks with discrete variables are difficult to train because the backpropagation algorithm — while permitting efficient computation of parameter gradients — cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally focused on either score function estimators augmented with Monte Carlo variance reduction techniques (Paisley et al., 2012; Mnih & Gregor, 2014; Gu et al., 2016; Gregor et al., 2013), or biased path derivative estimators for Bernoulli variables (Bengio et al., 2013). However, no existing gradient estimator has been formulated specifically for categorical variables. The contributions of this work are threefold:

1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approximate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick.
2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient estimators on both Bernoulli variables and categorical variables.
3. We show that this estimator can be used to efficiently train semi-supervised models (e.g. Kingma et al. (2014)) without costly marginalization over unobserved categorical latent variables.

The practical outcome of this paper is a simple, differentiable approximate sampling mechanism for categorical variables that can be integrated into neural networks and trained using standard backpropagation.

*Work done during an internship at Google Brain."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
We begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let z be a categorical variable with class probabilities π₁, π₂, ..., π_k. For the remainder of this paper we assume categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k−1)-dimensional simplex Δ^(k−1). This allows us to define quantities such as the element-wise mean E_p[z] = [π₁, ..., π_k] of these vectors.

The Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014) provides a simple and efficient way to draw samples z from a categorical distribution with class probabilities π:

z = one_hot(argmax_i [g_i + log π_i])

where g₁, ..., g_k are i.i.d. samples drawn from Gumbel(0, 1).¹ We use the softmax function as a continuous, differentiable approximation to argmax, and generate k-dimensional sample vectors y ∈ Δ^(k−1), where

y_i = exp((log(π_i) + g_i)/τ) / Σ_{j=1}^k exp((log(π_j) + g_j)/τ)    for i = 1, ..., k.

The density of the resulting Gumbel-Softmax distribution is

p_{π,τ}(y₁, ..., y_k) = Γ(k) τ^{k−1} (Σ_{i=1}^k π_i/y_i^τ)^{−k} Π_{i=1}^k (π_i/y_i^{τ+1})

This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature τ approaches 0, samples from the Gumbel-Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).

¹The Gumbel(0,1) distribution can be sampled using inverse transform sampling by drawing u ∼ Uniform(0, 1) and computing g = −log(−log(u)).

Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categorical distributions and continuous categorical densities. (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-Softmax distributions are identical to samples from a categorical distribution as τ → 0. At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as τ → ∞."}, {"section_index": "3", "section_name": "2.1 GUMBEL-SOFTMAX ESTIMATOR", "section_text": "The Gumbel-Softmax distribution is smooth for τ > 0, and therefore has a well-defined gradient ∂y/∂π with respect to the parameters π. Thus, by replacing categorical samples with Gumbel-Softmax samples we can use backpropagation to compute gradients (see Section 3.1). We denote this procedure of replacing non-differentiable categorical samples with a differentiable approximation during training as the Gumbel-Softmax estimator.

While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature.
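The sampling procedure above is easy to implement; a minimal NumPy sketch (ours, not the authors' code) follows.

```python
# Minimal sketch of Gumbel-Softmax sampling, following the equations above.
import numpy as np

def sample_gumbel(shape, eps=1e-20):
    # g = -log(-log(u)), u ~ Uniform(0,1): inverse transform sampling
    u = np.random.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax_sample(log_pi, tau):
    # y_i = exp((log pi_i + g_i)/tau) / sum_j exp((log pi_j + g_j)/tau)
    x = (log_pi + sample_gumbel(log_pi.shape)) / tau
    x = x - x.max()                  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

log_pi = np.log(np.array([0.1, 0.2, 0.7]))
print(gumbel_softmax_sample(log_pi, tau=0.1))   # near one-hot for small tau
print(gumbel_softmax_sample(log_pi, tau=10.0))  # near uniform for large tau
```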
In our experiments, we find that the softmax temperature τ can be annealed according to a variety of schedules and still perform well. If τ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process."}, {"section_index": "4", "section_name": "2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR", "section_text": "Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden representations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using argmax but use our continuous approximation in the backward pass by approximating ∇_θ z ≈ ∇_θ y. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature τ is high.

In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph with a discrete random variable z whose distribution depends on a parameter θ, and a cost function f(z). The objective is to minimize the expected cost L(θ) = E_{z∼p_θ(z)}[f(z)] via gradient descent, which requires us to estimate ∇_θ E_{z∼p_θ(z)}[f(z)]."}, {"section_index": "5", "section_name": "3.1 PATH DERIVATIVE GRADIENT ESTIMATORS", "section_text": "For distributions that are reparameterizable, we can compute the sample z as a deterministic function g of the parameters θ and an independent random variable ε, so that z = g(θ, ε). The path-wise gradients from f to θ can then be computed without encountering any stochastic nodes:

∂/∂θ E_z[f(z)] = ∂/∂θ E_ε[f(g(θ, ε))] = E_{ε∼p_ε}[(∂f/∂g)(∂g/∂θ)]

For example, the normal distribution z ∼ N(µ, σ) can be re-written as µ + σ · N(0, 1), making it trivial to compute ∂z/∂µ and ∂z/∂σ. This reparameterization trick is commonly applied to training variational autoencoders with continuous latent variables using backpropagation (Kingma & Welling, 2013; Rezende et al., 2014b). As shown in Figure 2, we exploit such a trick in the construction of the Gumbel-Softmax estimator.

Biased path derivative estimators can be utilized even when z is not reparameterizable. In general, we can approximate ∇_θ z ≈ ∇_θ m(θ), where m is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter θ, the Straight-Through (ST) estimator (Bengio et al., 2013) approximates m = µ_θ(z), implying ∇_θ m = 1. For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an alternative approach where each binary latent variable parameterizes a continuous mixture model; reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.

One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance. Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corresponding discrete sample z.
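A minimal sketch of the ST Gumbel-Softmax estimator, assuming PyTorch (PyTorch is our choice of tooling here, not the paper's): the forward pass emits the discretized one-hot sample, while gradients flow through the soft sample y.

```python
# Sketch of Straight-Through Gumbel-Softmax: discrete forward pass,
# continuous backward pass (gradient of z approximated by gradient of y).
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau):
    g = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0,1) noise
    y = F.softmax((logits + g) / tau, dim=-1)            # soft sample y
    y_hard = F.one_hot(y.argmax(dim=-1), logits.shape[-1]).float()
    # Forward: y_hard (one-hot). Backward: gradients flow through y only.
    return y_hard + y - y.detach()
```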
Figure 2: Gradient estimation in stochastic computation graphs. (1) ∇_θ f(x(θ)) can be computed via backpropagation if x(θ) is deterministic and differentiable. (2) The presence of a stochastic node z precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of ∇_θ f(z) by backpropagating along a surrogate loss f̂ log p_θ(z), where f̂ = f(x) − b and b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates ∇_θ z ≈ 1. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution y that approximates z. Reparameterization allows gradients to flow from f(y) to θ. y can be annealed to one-hot categorical variables over the course of training.

The score function estimator (SF) provides an unbiased gradient estimate via the identity

∇_θ E_z[f(z)] = E_z[f(z) ∇_θ log p_θ(z)]

SF only requires that p_θ(z) is continuous in θ, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al., 2014a), making it especially challenging to use for categorical distributions.

The variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f, and adding back its analytical expectation µ_b = E_z[b(z) ∇_θ log p_θ(z)] to keep the estimator unbiased:

∇_θ E_z[f(z)] = E_z[f(z) ∇_θ log p_θ(z) + (b(z) ∇_θ log p_θ(z) − b(z) ∇_θ log p_θ(z))]
            = E_z[(f(z) − b(z)) ∇_θ log p_θ(z)] + µ_b

We briefly summarize recent stochastic gradient estimators that utilize control variates; we direct the reader to Gu et al. (2016) for further detail on these techniques:

• NVIL (Mnih & Gregor, 2014) uses two baselines: (1) a moving average f̄ of f to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network fitted to f − f̄ (a control variate for the centered learning signal itself). Finally, variance normalization divides the learning signal by max(1, σ_f), where σ_f² is a moving average of Var[f].

• DARN (Gregor et al., 2013) uses b = f(z̄) + f′(z̄)(z − z̄), where the baseline corresponds to the first-order Taylor approximation of f(z) from f(z̄). z̄ is chosen to be 1/2 for Bernoulli variables, which makes the estimator biased for non-quadratic f, since it ignores the correction term µ_b in the estimator expression.

• MuProp (Gu et al., 2016) also models the baseline as a first-order Taylor expansion: b = f(z̄) + f′(z̄)(z − z̄) and µ_b = f′(z̄) ∇_θ E_z[z]. To overcome backpropagation through discrete sampling, a mean-field approximation f_MF(µ_θ(z)) is used in place of f(z̄) to compute the baseline and derive the relevant gradients.

• VIMCO (Mnih & Rezende, 2016) is a gradient estimator for multi-sample objectives that uses the mean of the other samples b = 1/m Σ_{j≠i} f(z_j) to construct a baseline for each sample z_i ∈ z_{1:m}. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi-sample objectives.
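For contrast with the path derivative approach, here is a minimal sketch of the score function estimator with a simple moving-average baseline, assuming PyTorch (the baseline is only the centering term, a simplification of the NVIL scheme above; f is assumed to return a scalar tensor):

```python
# Sketch of SF/REINFORCE with a moving-average baseline b, per the control
# variate identity above. logits parameterize p_theta(z); f is the cost.
import torch

def sf_step(logits, f, optimizer, baseline, decay=0.99):
    dist = torch.distributions.Categorical(logits=logits)
    z = dist.sample()
    cost = f(z).detach()                       # no gradient through f needed
    # Surrogate loss (f(z) - b) * log p_theta(z): its gradient is the estimator.
    loss = (cost - baseline) * dist.log_prob(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return decay * baseline + (1 - decay) * cost.item()  # updated baseline
```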
"}, {"section_index": "6", "section_name": "3.3 SEMI-SUPERVISED GENERATIVE MODELS", "section_text": "Semi-supervised learning considers the problem of learning from both labeled data (x, y) ∼ D_L and unlabeled data x ∼ D_U, where x are observations (i.e. images) and y are corresponding labels (e.g. semantic class). For semi-supervised classification, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian "style" variable z and a categorical "semantic class" variable y (Figure 6, Appendix). The VAE objective trains a discriminative network q_φ(y|x), inference network q_φ(z|x, y), and generative network p_θ(x|y, z) end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class y is observed, so inference is only done on z ∼ q_φ(z|x, y). The variational lower bound on labeled data is given by:

log p_θ(x, y) ≥ −L(x, y) = E_{z∼q_φ(z|x,y)}[log p_θ(x|y, z)] − KL[q_φ(z|x, y) || p_θ(y)p(z)]

For unlabeled data, difficulties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out y over all classes, so that for unlabeled data, inference is still done on q_φ(z|x, y) for each y. The lower bound on unlabeled data is:

log p_θ(x) ≥ −U(x) = E_{z∼q_φ(y,z|x)}[log p_θ(x|y, z) + log p_θ(y) + log p(z) − log q_φ(y, z|x)]
          = Σ_y q_φ(y|x)(−L(x, y)) + H(q_φ(y|x))

The full maximization objective is:

J = E_{(x,y)∼D_L}[−L(x, y)] + E_{x∼D_U}[−U(x)] + α · E_{(x,y)∼D_L}[log q_φ(y|x)]

where α is the scalar trade-off between the generative and discriminative objectives.

One limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sampling from q_φ(y|x), q_φ(z|x, y), and p_θ(x|y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through a single sample y ∼ q_φ(y|x), and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure 5.
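The cost gap is easy to see in code. Below is a minimal sketch, assuming PyTorch; `neg_L(x, y)` stands in for the labeled bound −L(x, y) (one encoder/decoder pass), and treating `q_logits` as the logits of q_φ(y|x) for a single example is also an assumption of this sketch.

```python
# Marginalized bound: k passes through -L (one per class), O(D + k(I+G)).
# Gumbel-Softmax bound: a single sampled pass, O(D + I + G).
import torch
import torch.nn.functional as F

def unlabeled_bound_marginalized(x, q_logits, neg_L):
    k = q_logits.shape[-1]
    q_y = F.softmax(q_logits, dim=-1)
    entropy = -(q_y * F.log_softmax(q_logits, dim=-1)).sum()
    terms = torch.stack([neg_L(x, torch.eye(k)[j]) for j in range(k)])
    return (q_y * terms).sum() + entropy       # -U(x) = sum_y q(y|x)(-L) + H

def unlabeled_bound_gumbel(x, q_logits, neg_L, tau):
    y = F.gumbel_softmax(q_logits, tau=tau)    # one differentiable sample of y
    entropy = -(F.softmax(q_logits, -1) * F.log_softmax(q_logits, -1)).sum()
    return neg_L(x, y) + entropy               # single-sample estimate of -U(x)
```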
We also found that variance normalization was nec\nessary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activatiot\nfunctions for binary (Bernoulli) neural networks and softmax activations for categorical variables\nModels were trained using stochastic cradient descent with momentum ().9.\nThe objective of structured output prediction is to predict the lower half of a 28 x 28 MNIST digit\ngiven the top half of the image (14 x 28). This is acommon benchmark for training stochastic binary\nnetworks (SBN) (Raiko et al.| 2014} Gu et al.| 2016} Mnih & Rezende| 2016). The minimization\nobjective for this conditional generative model is an importance-sampled estimate of the likelihood\nobjective, Ej, po (nslaue) Loy doi-1 108 Po (tower|/t)], where m = 1 is used for training and m =\n1000 is used for evaluation.\nWe trained a SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli\nvariables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with bi-\nnarized activations (denoted as 392-(20 x 10)-(20 x 10)-392).\nNegative Log-Likelihood\n*\n\nBernoulli SBN\n\nCategorical SBN\n\n\u2014+\u2014 Slope-Annealed\n\u2014\u2014 MuProp\n\n\u2014+\u2014 Gumbel-Softmax\n+ ST Gumbel-Softmax\n\nNegative Log-Likelinood\n\nSF\n\nst\n\n2 Slope-Annealed ST\n\u2014\u2014 MuProp\nGumbel-Softmax\n+ ST Gumbel-Softmax\n\nt\n\nA\n\nSteps (x1e3)\n\n(a)\n\nSteps (x1e3)\n\n(b)\n\u2014 SF\nst\n\n\u2014+\u2014 Slope-Annealed 5) \u2014s\u2014 Slope-Annealed ST\n\u2014\u2014 MuProp \u2014\u2014 MuProp\n\u2014+\u2014 Gumbel-Softmax \u2014+\u2014 Gumbel-Softmax\n\u2014\u2014 ST Gumbel-Softmax \u2014\u2014 ST Gumbel-Softmax\n\n@ @\n\n\u00a5 \u00a5\n\n7 7\n\na a\n\n8 8\n\nde de\n\nv v\n\n2 2\n\ncl cl\n\nFy Fy\n\nby by\n\nzZ zZ\n\nSteps (x1e3) Steps (x1e3)\nFigure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarizec\nMNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) anc\n(b) categorical latent variables (392-(20 x 10)-(20 x 10)-392)."}, {"section_index": "7", "section_name": "1.2. GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS", "section_text": "We train variational autoencoders (Kingma & Welling}|2013), where the objective is to learn a gener-\n\native model of binary MNIST images. In our experiments, we modeled the latent variable as a single\nhidden layer with 200 Bernoulli variables or 20 categorical variables (20 x 10). We use a learned cat-\negorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization\nobjective during training is no longer a variational bound if the samples are not discrete. In practice.\nAs shown in Figure [3] ST Gumbel-Softmax is on par with the other estimators for Bernoulli vari-\nables and outperforms on categorical variables. Meanwhile, Gumbel-Softmax outperforms other\nestimators on both Bernoulli and Categorical variables. We found that it was not necessary to anneal\nthe softmax temperature for this task, and used a fixed 7 = 1.\nThe temperature is annealed using the schedule 7 = max(0.5, exp(\u2014rt)) of the global training step\nt, where 7 is updated every N steps. 
As shown in Figure 4, ST Gumbel-Softmax outperforms the other estimators for categorical variables, and Gumbel-Softmax drastically outperforms the other estimators on both Bernoulli and categorical variables.

Figure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784-200-784) and (b) categorical latent variables (784-(20 x 10)-200).

Table 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to negative variational lower bounds (nats) on the log-likelihood (lower is better).

|             | SF    | DARN  | MuProp | ST    | Annealed ST | Gumbel-S. | ST Gumbel-S. |
| SBN (Bern.) | 72.0  | 59.7  | 58.9   | 58.9  | 58.7        | 58.5      | 59.3         |
| SBN (Cat.)  | 73.1  | 67.9  | 63.0   | 61.8  | 61.1        | 59.0      | 59.7         |
| VAE (Bern.) | 112.2 | 110.9 | 109.7  | 116.0 | 111.5       | 105.0     | 111.5        |
| VAE (Cat.)  | 110.6 | 128.8 | 107.0  | 110.9 | 107.8       | 101.5     | 107.8        |

We apply the Gumbel-Softmax estimator to semi-supervised classification on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al., 2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax.

We trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model $q_\phi(y|x)$ and inference model $q_\phi(z|x, y)$ are each implemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model $p_\theta(x|y, z)$ is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A.

Estimators were trained and evaluated against several values of $\alpha = \{0.1, 0.2, 0.3, 0.8, 1.0\}$, and the best unlabeled classification results on test sets were selected for each estimator and reported in Table 2. We used an annealing schedule of $\tau = \max(0.5, \exp(-3e{-}5 \cdot t))$, updated every 2000 steps.

In Kingma et al. (2014), inference over the latent state is done by marginalizing out $y$ and using the reparameterization trick for sampling from $q_\phi(z|x, y)$. However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint $q_\phi(y, z|x)$, achieving drastic speedups in training without compromising generative or classification performance (Table 2, Figure 5).

Table 2: Marginalizing over $y$ and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray, 2011). We report variational lower bounds and image classification accuracy for unlabeled data in the test set.
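The contrast between the two inference strategies can be sketched as follows; `labeled_bound` stands for $-\mathcal{L}(x, y)$ and the model callables are placeholders of ours, not the paper's code:

```python
import numpy as np

def unlabeled_bound_marginalized(x, q_y, labeled_bound, k):
    """-U(x) by explicit marginalization over y: O(k) bound evaluations."""
    probs = q_y(x)                                   # q_phi(y|x), shape (k,)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return sum(probs[y] * labeled_bound(x, y) for y in range(k)) + entropy

def unlabeled_bound_single_sample(x, q_y, q_y_sample, labeled_bound):
    """Single-sample estimate of -U(x): one relaxed y ~ q_phi(y|x), O(1)."""
    probs = q_y(x)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    y = q_y_sample(x)    # Gumbel-Softmax sample, differentiable w.r.t. phi
    return labeled_bound(x, y) + entropy
```

Each call to `labeled_bound` hides one inference and one generation pass (the $I + G$ cost in the text), which is why the marginalized version scales linearly with the number of classes while the single-sample version does not.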
In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of categorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2x as fast for 10 classes and 9.9x as fast for 100 classes.

Figure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior $q_\phi(y|x)$, providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginalization on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying the style variable $z$ across each row and the class variable $y$ across each column."}, {"section_index": "8", "section_name": "5 DISCUSSION", "section_text": "The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose corresponding estimator affords low-variance path derivative gradients for the categorical distribution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.

J. Chung, S. Ahn, and Y. Bengio.
Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.

A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwinska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.

K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.

S. Gu, S. Levine, I. Sutskever, and A. Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. ICLR, 2016.

E. J. Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.

H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.

C. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In Advances in Neural Information Processing Systems, pp. 3086-3094, 2014.

C. J. Maddison, A. Mnih, and Y. Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. ArXiv e-prints, November 2016.

A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 31, 2014.

A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016.

J. Paisley, D. Blei, and M. Jordan. Variational Bayesian inference with stochastic search. ArXiv e-prints, June 2012.

Gabriel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networks by penalizing confident output distributions. 2016.

J. W. Rae, J. J. Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. ArXiv e-prints, October 2016.

T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models.
In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014b.

J. T. Rolfe. Discrete variational autoencoders. ArXiv e-prints, September 2016.

Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) The generative model $p_\theta(x|y, z)$ synthesizes images from the latent Gaussian "style" variable $z$ and the categorical class variable $y$. (b) The inference model $q_\phi(y, z|x)$ samples the latent state $y, z$ given $x$. Gaussian $z$ can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when $y$ is not observed, training the VAE objective requires marginalizing over all values of $y$. (c) Gumbel-Softmax reparameterizes $y$ so that backpropagation is also possible through $y$ without encountering stochastic nodes.

Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3).

Figure 7: Network architecture for the (a) classification $q_\phi(y|x)$, (b) inference $q_\phi(z|x, y)$, and (c) generative $p_\theta(x|y, z)$ models. (a) and (b) are stacks of three 5 x 5 convolutions with stride 2 (32, 64, and 128 channels, ReLU) followed by an FC layer; (c) maps [y, z] through an FC layer and four 3 x 3 convolution-transpose layers with stride 2 (128, 64, 32, and 32 channels) followed by an FC layer. The outputs of these networks parameterize Categorical, Gaussian, and Bernoulli distributions, which we sample from."}, {"section_index": "10", "section_name": "B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION", "section_text": "Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities $\pi_1, \dots, \pi_k$ and temperature $\tau$. We first define the logits $x_i = \log \pi_i$, and Gumbel samples $g_1, \dots, g_k$, where $g_i \sim \mathrm{Gumbel}(0, 1)$. A sample from the Gumbel-Softmax can then be computed as

$y_i = \frac{\exp((x_i + g_i)/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j)/\tau)} \quad \text{for } i = 1, \dots, k$"}, {"section_index": "11", "section_name": "B.1 CENTERED GUMBEL DENSITY", "section_text": "The mapping from the Gumbel samples $g$ to the Gumbel-Softmax sample $y$ is not invertible, as the normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, $(x_k + g_k)/\tau$, before the softmax:

$y_i = \frac{\exp((x_i + g_i - (x_k + g_k))/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j - (x_k + g_k))/\tau)} \quad \text{for } i = 1, \dots, k$

where we define the centered variables

$u_i = x_i + g_i - (x_k + g_k) \quad \text{for } i = 1, \dots, k-1$
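The equivalence of the two sampling processes holds because the softmax is invariant to adding a constant to all of its inputs. A quick numerical check of this (our sketch, not from the paper):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = np.log(rng.dirichlet(np.ones(5)))        # logits x_i = log pi_i
g = -np.log(-np.log(rng.uniform(size=5)))    # Gumbel(0, 1) samples
tau = 0.8

y = softmax((x + g) / tau)
y_centered = softmax((x + g - (x[-1] + g[-1])) / tau)
assert np.allclose(y, y_centered)            # identical samples
```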
To derive the density of this equivalent sampling process, we first derive the density of the "centered" multivariate Gumbel variables:

$p(u_1, \dots, u_{k-1}) = \int_{-\infty}^{\infty} dg_k\, p(g_k) \prod_{i=1}^{k-1} p(u_i|g_k) = \int_{-\infty}^{\infty} dg_k\, e^{-g_k - e^{-g_k}} \prod_{i=1}^{k-1} e^{x_i - x_k - g_k - u_i - e^{x_i - x_k - g_k - u_i}}$

We perform a change of variables with $v = e^{-g_k}$, so $dv = -e^{-g_k}\, dg_k$ and $dg_k = -dv\, e^{g_k} = -dv/v$, and define $u_k = 0$ to simplify notation:

$p(u_1, \dots, u_{k-1}) = \int_{0}^{\infty} dv\, e^{-v} \prod_{i=1}^{k-1} v\, e^{x_i - x_k - u_i}\, e^{-v e^{x_i - x_k - u_i}} = \Gamma(k) \left( \sum_{i=1}^{k} \exp(x_i - u_i) \right)^{-k} \prod_{i=1}^{k} \exp(x_i - u_i)$

where the sum and product run over $i = 1, \dots, k$ with $u_k = 0$.

Given samples $u_1, \dots, u_{k-1}$ from the centered Gumbel distribution, we can apply a deterministic transformation $h$ to yield the first $k-1$ coordinates of the sample from the Gumbel-Softmax:

$y_{1:k-1} = h(u_{1:k-1}), \qquad h_i(u_{1:k-1}) = \frac{\exp(u_i/\tau)}{1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)}$

Note that the final coordinate is determined by the first $k-1$:

$y_k = \frac{1}{1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)}$

We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first $k-1$ variables:

$p(y_{1:k}) = p\left(h^{-1}(y_{1:k-1})\right) \left| \det \frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}} \right|$

So to compute the probability of the Gumbel-Softmax we need two more pieces: the inverse of $h$ and its Jacobian determinant. The inverse of $h$ is

$h^{-1}(y_{1:k-1})_i = \tau \times \left( \log y_i - \log\left(1 - \sum_{j=1}^{k-1} y_j\right) \right) = \tau \times (\log y_i - \log y_k)$

The determinant of the Jacobian can then be computed, giving the density of the Gumbel-Softmax distribution:

$p(y_1, \dots, y_k) = \Gamma(k)\, \tau^{k-1} \left( \sum_{i=1}^{k} \exp(x_i)/y_i^{\tau} \right)^{-k} \prod_{i=1}^{k} \left( \exp(x_i)/y_i^{\tau+1} \right)$
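A direct transcription of this density into code, evaluated in log-space for numerical stability (a sketch; the function name is ours):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def gumbel_softmax_log_density(y, logits, tau):
    """log p(y) for y on the simplex, with logits x_i = log pi_i."""
    k = y.shape[-1]
    return (gammaln(k) + (k - 1) * np.log(tau)
            - k * logsumexp(logits - tau * np.log(y), axis=-1)
            + np.sum(logits - (tau + 1) * np.log(y), axis=-1))
```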
HyEeMu_xx
[{"section_index": "0", "section_name": "PROGRESSIVE ATTENTION NETWORKS FOR VISUAL\nATTRIBUTE PREDICTION", "section_text": "Paul Hongsuck Seo', Zhe Lin*, Scott Cohen*, Xiaohui Shen* & Bohyung Han\n\na\n* Adobe Research\n{hsseo, bhhan}@postech.ac.kr\n{zlin, scohen, xshen}@adobe.cor"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Attentive mechanisms often play important roles in modern neural networks (NNs) especially in\ncomputer vision tasks. Many visual attention models have been introduced in the previous literature.\nand they have shown that attaching an attention to NNs can improve the accuracy in various tasks\nsuch as ae classification (Jaderberg et al. Buel) Ba et al. 2015} Mnih et al.|/2014}{Larochelle &\nThere are several motivations for incorporating attentive mechanisms in NNs. One of them is tha\nit is analogous to the perceptual process of human beings. The human visual system concentrate:\nattention to a region of interest instead of processing an entire scene. Likewise, in a neural attentiot\nmodel, we can focus processing only on attended areas of the input image. This benefits us in term:\nof computational resources; the number of hidden units may be reduced since the hidden activation:\nonly need to encode the region with attention (Mnih et al.}/2014).\nAnother important motivation is that some computer vision tasks, e.g. visual question answering\n(VQA), require identifying the object for accurate attribute prediction. For example, when the\ninput image contains multiple objects, the task should focus on the object specified by the question\nFigure[I]illustrates an example task to predict the color (answer) of a given input number (query)\nThe query specifies a particular object in the input image (number 7 in this example) for answering its\nattribute (red). To address this type of tasks, the network architecture should incorporate an attentive\nmechanism either explicitly or implicitly.\nOne of the most popular attention mechanisms for NNs is the soft attention method\n(2015), which aggregates responses in a feature map weighted by their attention probabilities (see\nAppendix [A] for more details). This process results in a single attended feature vector. Since the\nsoft attention method is fully differentiable, the entire network can be trained end-to-end with\nstandard backpropagation. However, it can only model attention to local regions with a certain size\ndepending on the receptive field of the layer chosen for attention. This makes the soft attention\nmethod inappropriate for complicated cases, where objects involve significant variations in theit\nscales, and shapes."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a novel attention model which can accurately attend to target objects\n\u00bbf various scales and shapes in images. The model is trained to gradually suppress\nrrelevant regions in an input image via a progressive attentive process over multiple\nayers of a convolutional neural network. The attentive process in each layet\njetermines whether to pass or suppress features at certain spatial locations for use\nn the next layer. We further employ local contexts to estimate attention probability\nit each location since it is difficult to infer accurate attention by observing a feature\nvector from a single location only. 
The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.

(a) input image (b) first attention (c) second attention (d) third attention (e) final attention
Figure 1: An example reference problem (with the query 7 and the answer red) and intermediate attention maps using our progressive attention model. It shows that attention is gradually refined through the network layers for resolving the reference problem. Distracting patterns at smaller scales are suppressed at earlier layers while those at larger scales (e.g. 9) are suppressed at later layers with larger receptive fields. All attended images are independently rescaled for the visualization.

To overcome this limitation, we propose a novel attention network, referred to as progressive attention network (PAN), which enables precise attention over objects of different scales and shapes by attaching attentive mechanisms to multiple layers within a convolutional neural network (CNN). More specifically, the proposed network forces attention prediction in intermediate feature maps by forwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since a feature to be attended in the current feature map is obtained by combining lower-level features with smaller receptive fields, the network can learn to distill the precise spatial support relevant to the target objects as final attention. The contribution of this work is three-fold:

- A novel attention model (progressive attention network) which can be learned to predict attention matching the accurate scale and shape of a target object
- Use of local contexts to improve the stability of the progressive attention model
- Achievement of significant performance improvement over traditional soft and hard attention approaches in query-specific visual attribute prediction tasks

The rest of this paper is organized as follows. We first review related work in Section 2. In Section 3, we describe the proposed model with local context information. We then present our experimental results on several datasets in Section 4 and conclude the paper in Section 5.

Attention on Features The most straightforward attention mechanism is a feature-based method, which selects a subset of features by explicitly attaching an attention model to NN architectures. The approaches relying on this attention mechanism have improved performance in many tasks (Xu et al., 2015; Bahdanau et al., 2015; Luong et al., 2015; Graves et al., 2014). For example, they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luong et al., 2015), speech recognition and handwriting generation (Graves, 2013), and to manage memory access mechanisms for memory networks (Weston et al., 2015) and neural turing machines (Graves et al., 2014). When applied to computer vision tasks to resolve reference problems, these models are designed to pay attention to CNN features corresponding to subregions in the input image. Image caption generation and visual question answering are typical examples benefiting from this attention mechanism (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

Attention by Image Transformation Another stream of attention models is based on image transformations.
These approaches transform a regular grid and sample from the input image with the transformed grid, whose elements correspond to locations in the input image. Ba et al. (2015) and Mnih et al. (2014) transform an input image with predicted translation parameters and a fixed scale factor (ŝ < 1) for image classification or multiple object recognition. A scale factor is also predicted in (Gregor et al., 2015) for image generation, where the network uses Gaussian filters for sampling. Spatial transformer networks (STNs) predict all six parameters of the affine transformation matrix, and even extend it to a projective transformation and a 16-point thin plate spline transformation (Jaderberg et al., 2015). Because all these transformations used in (Jaderberg et al., 2015) involve scale factors, STNs are capable of dealing with objects of different sizes. However, STN is limited when there are multiple candidate regions for attention. Our model overcomes this problem by formulating attention as progressive filtering on feature maps instead of assuming objects can be roughly aligned by a single spatial transformation.

Multiple Attention Processes There have been several approaches iteratively performing attentive processes to resolve relations between targets. Yang et al. (2015) iteratively attend to images conditioned on the previous attention states for visual question answering, as the objects of interest are often not specified explicitly in questions but implicitly in relational expressions about the target objects. Also, Weston et al. (2015) and Graves et al. (2014) incorporate attention mechanisms into memory cells iteratively to retrieve different values stored in the memory. Our proposed model is similar in spirit to iterative attention but aimed at attending to a single target object via operating on multiple CNN layers progressively, i.e., attention information is predicted progressively from feature maps through multiple layers of the CNN to capture the fine shape of the target object.

In (Jaderberg et al., 2015), the authors also conducted an experiment with a network with multiple transformer layers. However, the attention shapes of STNs are still constrained to the type of transformation regardless of the number of transformers. In contrast, the quality of the attention shapes is improved through the progressive attention process in the proposed method. Stollenga et al. (2014) introduced a deep network which manipulates intermediate features of a fixed classifier through a channel-wise attention process. Although the channel-wise attention process is used at multiple layers of the network to manipulate the intermediate feature representations, they never explored a spatial attention process. More importantly, this method requires an accurate pretrained classifier for the target classes prior to learning attention, while pretraining a general query-specific attribute classifier is not trivial. It is also notable that both of these methods target simple classification tasks without queries, while we aim to tackle the query-specific attribute prediction task, where answers from a single input image can be very different depending on the input query.

Training Attention Models The networks with soft attention are fully differentiable and thus trainable end-to-end by backpropagation. Xu et al. (2015) and Zaremba & Sutskever (2015) introduced a stochastic hard attention, where the network explicitly selects a single feature based on the predicted attention probability map.
Because the explicit selection (or sampling) procedure is not differentiable, the REINFORCE learning rule (Williams, 1992) is used to make networks trainable. Transformation-based attention models (Ba et al., 2015; Mnih et al., 2014) are mostly trained by the REINFORCE learning rule, but STN (Jaderberg et al., 2015) proposed a fully differentiable formulation and made it possible to train end-to-end. Compared to these attention networks, the proposed network is also trainable end-to-end by standard backpropagation without any extra techniques, since every operation within the network is differentiable.

To overcome the limitation of existing attention models in handling variable object scales and shapes, we propose a progressive attention mechanism. In the proposed model, irrelevant features at different scales are suppressed by attention filtering steps in different CNN layers, and computation is focused on the features corresponding to regions of interest. At each attention layer, the model predicts an attention map given the input query and the current feature map via an attention module, and then the attention map is multiplied to the feature map channel-wise to obtain the attended feature map. The attended feature map in each layer is then forwarded to the next layer of the CNN for construction of the following feature map, which is illustrated in Figure 2. This progressive attention process allows us to estimate precise details of attention areas while maintaining deep representations appropriate for high-level inference tasks.

Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied to feature maps at multiple layers, and the resulting attended feature maps are used as input feature maps for the next convolution layers in the CNN. Attention probabilities $a^l$ are estimated from feature maps and the input query. In the last attention layer, the attended feature maps are aggregated to a single feature vector (by sum pooling) and fed to the final attribute classifier."}, {"section_index": "3", "section_name": "3.1 PROGRESSIVE ATTENTIVE PROCESS", "section_text": "Let $f^l \in \mathbb{R}^{H_l \times W_l \times C_l}$ be an output feature map of a layer $l \in \{0, \dots, L\}$ in the CNN with width $W_l$, height $H_l$ and $C_l$ channels, and $f^l_{i,j} \in \mathbb{R}^{C_l}$ be a feature at $(i, j)$ of the feature map $f^l$. In the proposed PAN, an attentive process is applied to multiple layers of the CNN, and we obtain the attended feature map $\hat{f}^l = [\hat{f}^l_{i,j}]$, which is given by

$\hat{f}^l_{i,j} = a^l_{i,j} \cdot f^l_{i,j}$

Here, the attention probability $a^l_{i,j}$ for a feature $f^l_{i,j}$ is calculated by

$s^l_{i,j} = g^l_{att}(f^l_{i,j}, q; \theta^l_{att}) \qquad \text{and} \qquad a^l_{i,j} = \begin{cases} \mathrm{softmax}_{i,j}(s^L) & \text{if } l = L \\ \sigma(s^l_{i,j}) & \text{otherwise} \end{cases}$

where $g^l_{att}(\cdot)$ denotes the attention function with a set of parameters $\theta^l_{att}$ for layer $l$, $s^l_{i,j}$ is the attention score at $(i, j)$ in layer $l$, $q$ is the query, and $\sigma(\cdot)$ is a sigmoid function. The attention probability at each location is independent of the others in the same feature map, where a sigmoid function is employed to constrain attention probabilities between 0 and 1.
For the last layer of attention, we use a softmax function over the entire spatial region for the final aggregation of features. Unlike the soft attention model (see Appendix A), in the intermediate attention layers the attended feature map $\hat{f}^l$ is not summed up to generate a single vector representation of the attended regions. Instead, the attended feature map is forwarded to the next layer as an input to compute the next feature map, which is given by

$f^{l+1} = g^{l+1}_{CNN}(\hat{f}^l; \theta^{l+1}_{CNN})$

This feedforward procedure with attentive processes in the CNN is repeated from the input of the CNN, i.e., $f^0 = I$, until $f^L$ is obtained. Then, the attended feature $f^{att}$ is finally retrieved by summing up all the features in the final attended feature map $\hat{f}^L$ as in soft attention, which is given by

$f^{att} = \sum_{i}^{H} \sum_{j}^{W} \hat{f}^L_{i,j} = \sum_{i}^{H} \sum_{j}^{W} a^L_{i,j} f^L_{i,j}$

The attended feature $f^{att}$ obtained by such a process is then used as the input to the visual attribute classifier as illustrated in Figure 2.

In our models, we place the attention layers at the outputs of max pooling layers instead of every layer in the CNN, because the reduction of feature resolution within the CNN mainly comes from pooling layers. In practice, we can also skip the first few pooling layers and only attach the attention module to the outputs of the last K pooling layers.

Figure 3: Attention estimation (a) without local context and (b) with local context. In (a), $a^l_{i,j}$ is predicted from $f^l_{i,j}$ only, while its spatially adjacent features are also used to estimate $a^l_{i,j}$ in (b).
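To make the attentive process above concrete, the following is a minimal NumPy sketch of one progressive-attention forward pass. The helper names (`score_map`, `conv_layers`, `att_fns`) are ours, not the paper's; a real implementation would use a differentiable framework so the whole pass can be trained by backpropagation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_map(f, att_fn, query):
    """Evaluate s^l_{ij} = g_att(f^l_{ij}, q) at every spatial location."""
    H, W, _ = f.shape
    return np.array([[att_fn(f[i, j], query) for j in range(W)]
                     for i in range(H)])

def progressive_attention(x, conv_layers, att_fns, query):
    """x: H x W x C input feature map f^0; conv_layers[l]: g^{l+1}_CNN;
    att_fns[l]: g^l_att. Returns the aggregated attended feature f_att."""
    f = x
    L = len(att_fns) - 1
    for l, att_fn in enumerate(att_fns):
        s = score_map(f, att_fn, query)
        if l < L:
            a = sigmoid(s)                  # independent per-location gating
        else:
            e = np.exp(s - s.max())
            a = e / e.sum()                 # softmax over all (i, j) at l = L
        f_hat = a[:, :, None] * f           # hat{f}^l_{ij} = a^l_{ij} f^l_{ij}
        if l < L:
            f = conv_layers[l](f_hat)       # f^{l+1} = g_CNN(hat{f}^l)
    return f_hat.sum(axis=(0, 1))           # f_att = sum_ij hat{f}^L_{ij}
```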
"}, {"section_index": "4", "section_name": "3.2 MULTI-RESOLUTION ATTENTION ESTIMATION", "section_text": "In Eq. (2), the resolution of the attention probability map $a^l$ depends on the size of the feature map in the corresponding layer. Due to the nature of a CNN with convolution and pooling layers, the resolution of $a^l$ will decrease with the increasing depth of a layer. Since the attentive processes are performed over multiple layers recursively in our framework, it is possible to attend to regions of specific sizes and shapes. Note that the proposed network can exploit high-level semantics in deep representations for inference without losing attention resolution.

The progressive attention model is still very effective in predicting fine attention shapes, as the attention information is aggregated over multiple layers to suppress irrelevant structures at different granularity. In lower layers, features whose receptive fields contain small distractors are suppressed first. Meanwhile, the features from a part of large distractors remain intact but are passed to the next layer, delaying their suppression. In higher layers, features of these large distractors would get low attention probability, as each feature contains information from larger receptive fields, allowing the attention module to distinguish whether the feature is from a distractor or the target object. This phenomenon is well demonstrated in the qualitative results in our experiments (Section 4). An additional benefit of progressive attention is that it is more straightforward during inference since it is a pure feedforward network."}, {"section_index": "5", "section_name": "3.3 LOCAL CONTEXT", "section_text": "A basic version of PAN discussed so far predicts an attention probability $a^l_{i,j}$ based solely on the feature $f^l_{i,j}$ at a single feature map location. We can improve the quality of attention estimation by allowing the attention layers to observe a local context of the target feature. The local context $F^l_{i,j}$ of a feature $f^l_{i,j}$ is composed of its spatially adjacent features. For example, the local context can be given by $F^l_{i,j} = \{f^l_{s,t} \mid i - \delta \leq s \leq i + \delta,\, j - \delta \leq t \leq j + \delta\}$ as illustrated in Figure 3. The attention score is now predicted by the attention network with local context as

$s^l_{i,j} = g^l_{att}(F^l_{i,j}, q; \theta^l_{att})$

In this architecture, the area of the local context is given by the filter size corresponding to the composite operation of convolution followed by pooling in the next layer. The local context does not need to be considered in the last layer of attention, since its activations are used to compute the final attended feature map. Local context improves attention prediction as it enables the centroid feature to be compared with surrounding features, which makes the estimated attention more discriminative."}, {"section_index": "6", "section_name": "3.4 TRAINING PROGRESSIVE ATTENTION NETWORKS", "section_text": "Training a PAN is as simple as training a soft attention network (Xu et al., 2015), because every operation within the network is differentiable. The entire network is trained by standard backpropagation, minimizing the binary cross entropies of the object-specific visual attributes. When we train it from a pretrained CNN, the CNN part should always be fine-tuned together, since the intermediate attention maps may change the input distributions of their associated layers in the CNN.
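As a sketch of this training objective, assuming a network `pan` that maps an (image, query) pair to per-attribute logits (our naming, not the paper's code), a loss computation could look like:

```python
import numpy as np

def binary_cross_entropy(logits, targets):
    """Mean BCE over the object-specific attribute labels (targets in {0,1})."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

# Hypothetical usage: loss = binary_cross_entropy(pan(image, query), labels);
# gradients of `loss` flow through every attention layer, since all
# operations in PAN are differentiable.
```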
"}, {"section_index": "7", "section_name": "4.1 MNIST REFERENCE", "section_text": "Datasets We conduct experiments on a synthetic dataset created from MNIST (LeCun et al., 1998). The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each training example is a triple of an image, a query number and its color label. The task on this dataset is to predict the color of the number identified by a query. Five to nine distinct MNIST numbers with different colors in {green, yellow, white, red, blue} and scales in [0.5, 3.0] are randomly sampled and located in each 100 x 100 image. When coloring numbers, Gaussian noise is added to the reference color value. To simulate more realistic situations, we made two variants of MREF by changing backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c). Background images in MDIST are constructed with randomly cropped 5 x 5 patches of MNIST images, whereas backgrounds of MBG are filled with natural scene images randomly chosen from the SUN Database (Xiao et al., 2014). The training, validation and test sets contain 30,000, 10,000 and 10,000 images respectively.

Figure 4: Example of the (a) MREF, (b) MDIST and (c) MBG datasets.

Figure 5: Detailed illustration of network architectures in the MNIST Reference experiments. (a) Network architectures of models on MREF: STN, SAN, HAN and PAN share four stacks of conv (3x3@32) and pool (2x2) layers; PAN attaches att1-att4 after the four pooling layers, while STN, SAN and HAN attach a single attention module (att (STN), att (soft), att (hard)) before the final fc classification layer. Arrows represent direct connections to the next layer without attention. (b) Architecture of the attention function $g^l_{att}(\cdot)$: an fc fusion layer with 32 activations followed by an fc estimation layer with 1 activation producing $s^l_{i,j}$. Local contexts $F^l_{i,j}$ are used only in PAN-CTX.

Experimental Settings We implement the proposed network with and without the local context observation, referred to as PAN-CTX and PAN, respectively. In addition, the soft attention network (SAN), the hard attention network (HAN) (Xu et al., 2015) and two variants of the spatial transformer network (STN-S and STN-M) (Jaderberg et al., 2015) are used as baseline models for comparisons. While STN-S is the model with a single transformer layer, STN-M contains multiple transformer layers in the network. We reimplemented SAN and STNs following the descriptions in (Xu et al., 2015) and (Jaderberg et al., 2015), respectively, and trained HAN by optimizing the marginal log-likelihood loss, as it is more accurate and feasible due to the small search space in our task. The architectures of the image encoding network in SAN and HAN and the localization networks in STNs are all identical for fair comparisons. The CNN in the proposed network also has the same architecture except for the additional layers for hierarchical attention. The CNN is composed of four stacks of 3 x 3 convolutions with 32 channels (stride 1), each followed by a 2 x 2 max pooling layer (stride 2), as illustrated in Figure 5a. We used a single fc layer for classification because the task requires simple color prediction. The attention functions $g^l_{att}(\cdot)$ for all models are formed as multi-layer perceptrons with two layers (Figure 5b). The function takes the concatenation of a query $q$, which is a one-hot vector representing the target object, and a feature vector $f^l_{i,j}$, and outputs an attention score $s^l_{i,j}$. In PAN-CTX, the attention functions of att1, att2 and att3 additionally take the local context $F^l_{i,j}$ containing the adjacent features with $\delta = 2$. Every model is trained from scratch.
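The two-layer attention MLP just described can be sketched as follows; the weight shapes follow the fusion layer of 32 activations and single-activation scoring layer in Figure 5b, while the initialization and function names are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_att_fn(feature_dim, num_queries, hidden=32):
    """Two-layer MLP g_att: concat(feature, one-hot query) -> score s^l_ij."""
    W1 = rng.normal(0, 0.01, size=(feature_dim + num_queries, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.01, size=(hidden, 1))
    b2 = np.zeros(1)

    def att_fn(feature, query_one_hot):
        h = np.tanh(np.concatenate([feature, query_one_hot]) @ W1 + b1)
        return (h @ W2 + b2)[0]
    return att_fn
```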
Table 1: Performance of attention models on the MREF, MDIST, and MBG datasets.

(a) Color prediction accuracy [%]

|         | MREF  | MDIST | MBG   |
| STN-S   | 39.10 | 38.32 | 32.27 |
| STN-M   | 93.89 | 85.09 | 52.25 |
| SAN     | 82.94 | 75.73 | 53.77 |
| HAN     | 81.84 | 78.49 | 55.84 |
| PAN     | 95.92 | 91.65 | 69.46 |
| PAN-CTX | 98.51 | 96.02 | 85.55 |

(b) True-positive ratio [%]

|         | MREF  | MDIST | MBG   |
| Uniform | 2.34  | 2.35  | 2.39  |
| SAN     | 13.61 | 12.56 | 6.73  |
| HAN     | 13.95 | 13.81 | 7.64  |
| PAN     | 17.39 | 13.10 | 8.62  |
| PAN-CTX | 22.59 | 22.80 | 11.01 |

Figure 6: Analysis of algorithms on MREF (left), MDIST (middle), and MBG (right). (a) Attribute prediction accuracies of the different models on test subsets of different scales. (b) The precision-recall curves of object segmentation with attention probability.

Results Table 1a presents the color prediction accuracy of all compared algorithms. It is obvious that PAN outperforms all the previous approaches with significant margins, and PAN-CTX further improves the performance by exploiting the local contexts for attention estimation. While STN-S often fails to predict the correct answers, STN-M learns to predict the color of the target object through multiple transformations and shows performance comparable to PAN on MREF. However, the performance of STN-M drops dramatically as the dataset becomes more complex and realistic, resulting in even lower performance than SAN and HAN. Also, note that STN-S is capable of attending to any region attended by STN-M, since both models predict attention regions by estimating an affine transformation. STN-M achieves the improvement by learning multiple transformers from gradients coming from different levels of features. In contrast to those parametric models, the proposed network can predict attention maps with more fine-grained shapes, capturing the spatial support of the target object better.

To evaluate the scale sensitivity of each model, we divided the test images into five subsets based on target object scales with uniform intervals and computed the accuracies of the models. The results are presented in Figure 6a, where SAN and HAN tend to predict the correct answers only in a scale range between 1.0 and 2.0, while their performance degrades significantly with wild scale changes. STN-M becomes vulnerable to scale variations in more realistic settings. In contrast, PAN and PAN-CTX are robust to scale variations due to their multi-scale attention mechanism, especially when the local contexts are incorporated.

Unlike STNs, whose attention is constrained to rhombic regions, the models based on feature-wise attention maps can produce attention regions adaptive to the shapes of the target object. We evaluate the attention quality of these models using two complementary criteria: true-positive ratio (TPR) and precision-recall (PR) curve. TPR measures how strongly attention is given to the proper location by computing the ratio of the aggregated attention probability within the desired area (a.k.a., ground-truth segmentation) to the attention probability in the whole image (Table 1b). PR measures the overlaps between ground-truth segmentations and binarized segmentation predictions constructed with different thresholds (Figure 6b). Note that the proposed model with the local context observation gives the best results with significant margins compared to all the other methods in terms of both criteria. These results suggest that PAN-CTX constructs more accurate shapes of attended regions than all other attention models.
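For concreteness, the TPR criterion described above can be computed as below; `attention` is a spatial attention probability map and `gt_mask` a binary ground-truth segmentation of the same shape (our variable names):

```python
import numpy as np

def true_positive_ratio(attention, gt_mask):
    """Fraction of total attention mass that falls inside the ground truth."""
    return attention[gt_mask.astype(bool)].sum() / attention.sum()
```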
Figure 7: Qualitative results of SAN, HAN and PAN-CTX. (a) Input images faded by the attended feature map of (c). (b) Magnitude of activations in the feature maps $f^l_{i,j}$ before attention: the activations are mapped to the original image space by spreading activations to their receptive fields. (c) Magnitude of activations in the attended feature maps $\hat{f}^l_{i,j}$, which shows the effect of attention in contrast to (b). (d) Magnitude of activations of the attended feature maps $\hat{f}^l_{i,j}$ in the original resolution of the feature map. For PAN-CTX, only the last three attention layers are visualized, and attentions of earlier layers are accumulated for visualizing higher attention layers. For HAN, (c) and (d) represent attention probability because the attended feature map is not available. Every image except for the input image is rescaled into [0, 1] by (x - min)/(max - min).

Figure 7 shows the qualitative results of the proposed method and two baselines on the MBG dataset. The proposed model yields accurate attention regions eventually by gradually augmenting attention and suppressing irrelevant regions in the image. We can observe that the proposed model could maintain the high attention resolution through the progressive attention process. In contrast, the baseline models attend to the target objects only once at the top layer, resulting in attention that is coarse in size and shape. More qualitative results in these experiments are presented in Appendix C.

Dataset Visual Genome (VG) (Krishna et al., 2016) is an image dataset containing several types of annotations: question/answer pairs, image captions, objects, object attributes and object relationships. We formulate the object attribute prediction as a multi-label classification task with reference. Given an input image and a query (i.e., an object category), we predict the binary attributes of the individual objects specified by the query. We used 827 object classes and 749 attribute classes that appear more than 100 times. A total of 86,674 images with 667,882 object attribute labels are used for our experiment, and they are split into training, validation and test sets containing 43,337, 8,667 and 34,670 images respectively. The task is challenging because the scales of objects vary widely and the attributes may be associated with very small objects.

Experimental Settings and Results We mainly compare our algorithm with SAN and HAN, since STNs could not learn a proper attention process on VG. The transformer layers of STNs generated padded images of different sizes and rotations to encode the query vector to fit the query-specific biases. All the networks share the same CNN architecture of the VGG-16 network (Simonyan & Zisserman, 2015), which is pretrained on ImageNet (Deng et al., 2009) and is further fine-tuned on the VG dataset for the attribute prediction. For SAN and HAN, an attention layer is attached to the last pooling layer in VGG-16, while PAN stacks an additional attention layer with the local contexts $F^l_{i,j}$ with $\delta = 2$ on top of each of the last three pooling layers in VGG-16.
We skip to place\nattention layers at the first two pooling layers (pooll and pool2) because the features in those layers\nare not discriminative enough to filter out.We also test models with object class conditional prior. In\nthese models, the final attended feature is fused with the query once more by a fully connected layer\nallowing the network to reflect the conditional distribution of the attributes given the query. Refer to\nAppendix|Blfor more detailed descriptions on the network architectures.\nAll three models are evaluated in terms of mean average precision (mAP) weighted by the frequen-\ncies of the attribute labels in the test set, where the computation of mAP follows PASCAL VOC\nprotocol (Everingham et al.|{2010). The proposed method consistently achieves the best weighted\nmAP scores in both experimental settings as shown in Table[2]but the gain reduces with object class\nconditional prior. Table 2Jalso shows TPR of each model measured with the ground-truth bounding\nbox for evaluating the attention qualities, and the proposed method shows the best TPR. Figure[8|\npresents the qualitative results of the proposed network and HAN on VG dataset."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question\nanswering with neural module networks. In CVPR, 2016.\nJimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visua\nattention. In JCLR, 2015.\nKarol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural network fe\nimage generation. In JCML, pp. 1462-1471, 2015.\nMax Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NIPS,\npp. 2008-2016, 2015.\nRanjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie\nChen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language\nand vision using crowdsourced dense image annotations. arXiv preprint arXiv: 1602.07332, 2016\nHugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order\nboltzmann machine. In N/PS, pp. 1243-1251, 2010.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imag\nrecognition. JCLR, 2015.\nJason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In JCLR, 2015.\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement\nlearning. Machine learning, 8(3-4):229-256, 1992.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly\nlearning to align and translate. In JCLR, 2015.\nYann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc\ndocument recognition. Proceedings of the IEEE. 86(11):2278\u20142324. 1998.\nVolodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In N/PS\npp. 2204-2212, 2014.\nJianxiong Xiao, Krista A Ehinger, James Hays, Antonio Torralba, and Aude Oliva. Sun database:\nExploring a large collection of scene categories. International Journal of Computer Vision, pp.\n1-20, 2014.\nKelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and\nYoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In\nICML, 2015.\nZichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks fo\nimage question answering. 
arXiv preprint arXiv:1511.02274, 2015.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015."}, {"section_index": "9", "section_name": "Appendices", "section_text": "In this appendix section, we explain the soft attention network introduced in (Xu et al., 2015) and used as one of the baseline models in the experiments. Given a feature map, the soft attention network calculates an attention probability map and uses it to compute the attended feature for classification or other tasks. Given a feature map $f \in \mathbb{R}^{H \times W \times C}$ and a query $q$ containing the information of where to attend, a soft attention model first obtains an attended feature map $\hat{f} \in \mathbb{R}^{H \times W \times C}$, where $W$ is width, $H$ is height, and $C$ is the number of channels. The input feature map $f$ is generally a CNN output of an input image $I$, which is given by

$f = \mathrm{CNN}(I)$

For each feature $f_{i,j} \in \mathbb{R}^C$ at $(i, j)$ of the feature map $f$ and the query $q$, the attention probability map denoted by $a = [a_{i,j}]$ is given by

$s_{i,j} = g_{att}(f_{i,j}, q; \theta_{att})$
$a_{i,j} = \mathrm{softmax}_{i,j}(s), \qquad 0 \leq a_{i,j} \leq 1$

where $g_{att}(\cdot)$ is the attention network parameterized by $\theta_{att}$ and $s = [s_{i,j}]$ is an attention score map. The attention score map is normalized with softmax to produce attention probabilities $a_{i,j}$. Note that $g_{att}(\cdot)$ can be any kind of network, such as a multilayer perceptron.

Let $\hat{f}_{i,j} = a_{i,j} f_{i,j} \in \mathbb{R}^C$ be a vector of the attended feature map $\hat{f}$ at $(i, j)$. Then, the attended feature denoted by $f^{att} \in \mathbb{R}^C$ is computed by a weighted sum of features as

$f^{att} = \sum_{i}^{H} \sum_{j}^{W} \hat{f}_{i,j} = \sum_{i}^{H} \sum_{j}^{W} a_{i,j} f_{i,j}$

Ideally, the locations in the feature map corresponding to the receptive fields containing an object of interest should have the maximum attention probability while the others have zero probabilities, similarly to hard attention. This statement stands true only if the target object is perfectly aligned with the receptive fields in terms of position and scale. In practice, however, object location and size vary, whereas the structure of receptive fields is fixed. Note that there exists a trade-off between attention resolution and representation power. If we choose to extract deep and high-level features, we give up high resolution in attention. On the other hand, we need to rely on shallow representations to increase attention resolution. This trade-off limits the performance of existing attention models."}, {"section_index": "10", "section_name": "B NETWORK ARCHITECTURES ON VISUAL GENOME", "section_text": "In PAN, the convolution and pooling layers of the VGG-16 network (Simonyan & Zisserman, 2015), pretrained on ImageNet (Deng et al., 2009), are used, and three additional attention layers att1, att2 and att3 are stacked on top of the last three pooling layers pool3, pool4 and pool5, respectively, as illustrated in Figure 9a. The attention functions of att1 and att2 take the local contexts $F^l_{i,j}$ in addition to the query $q$ and the target feature $f^l_{i,j}$ to obtain the attention score $s^l_{i,j}$. The size of the local contexts is matched with that of the receptive fields of the next three convolution layers before the next attention by setting $\delta = 3$. Three convolutions, the same as the next three convolution layers in the CNN, first encode the target feature and the local context, and are initialized with the same weights as in the CNN (Figure 9b). This embedding is then concatenated with the one-hot query vector and fed to two fully connected layers, one fusing the two modalities and the other estimating the attention score. In att3, the attention function takes the concatenation of the query and the target feature and feeds it to two fully connected layers (Figure 9c). The attended feature $f^{att}$ obtained from the last attention layer att3 is finally fed to a classification layer to predict the attributes.

The baseline networks also share the same CNN architecture of the VGG-16 network as in PAN (Figure 9a). In SAN, the soft attention described in Appendix A is attached to the top of the CNN. In HAN, the hard attention (Xu et al., 2015) is attached to the top of the CNN instead. The hard attention is implemented to maximize the marginal likelihood directly during training, while the original paper maximized the variational lower bound of the marginal likelihood because of the large attention search space. For testing, we also directly calculate the marginal likelihood instead of picking a single prediction with the highest attention probability. This is possible because of the relatively small search space of attention in our problem, compared to image captioning, where the search space of attention increases exponentially with the lengths of sequences. The attention functions in the baselines consist of two fully connected layers taking the concatenation of the query and the target feature, as in the attention function of att3 in PAN.
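A minimal sketch of this marginal-likelihood formulation of hard attention: the prediction marginalizes the per-location classifier output over the attention distribution. The function names are ours:

```python
import numpy as np

def han_marginal_probability(attention, per_location_probs):
    """p(attr | I, q) = sum_{i,j} a_{ij} * p(attr | f_{ij}, q).

    attention: H x W attention probabilities (sums to 1).
    per_location_probs: H x W x A attribute probabilities per location.
    """
    return np.einsum('ij,ija->a', attention, per_location_probs)

# Training maximizes log(han_marginal_probability(...)[gt_attr]) directly,
# which is feasible here because the attention search space (H x W) is small.
```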
Figure 9: Detailed illustration of network architectures in the Visual Genome experiments. (a) Network architectures of the models: the VGG-16 convolution and pooling stacks (conv1_1 through pool5), with att1, att2 and att3 attached after pool3, pool4 and pool5 in PAN, and a single soft or hard attention module attached after pool5 in SAN and HAN, followed by the fc classification layer. (b) Architecture of the intermediate attention functions $g^l_{att}(\cdot)$ in att1 and att2 of PAN: a feature+context embedding (two 3 x 3 convolution layers) followed by an fc fusion layer and an fc estimation layer with one activation producing $s^l_{i,j}$. (c) Architecture of the attention functions of SAN, HAN and att3 of PAN: an fc fusion layer with 512 activations followed by an fc estimation layer with one activation producing $s^L_{i,j}$.

The proposed network and the baselines described above use the query for obtaining the attention probabilities, which gives us the pure strength of the attention models. However, the target object class, represented by the query, gives much more information than just attention. It confines the possible attributes and filters irrelevant attributes. For these reasons, we additionally experiment on a set of models that incorporate the target object class conditional prior for the attribute prediction. In these models, the query is fused with the attended feature $f^{att}$ by an additional fully connected layer, and the fused feature is used as the input of the classification layer.

Figure 10: The qualitative results of SAN, HAN and PAN-CTX on the MREF and MDIST datasets. For each example, attended images are shown in the first row and the corresponding attention maps are shown in the second row. In the case of the progressive attention network, the last three attention maps (attention 2, 3 and 4) are visualized.
As can be seen, the attention maps at deeper layers reveal the evidence of aggregation over earlier attention maps.

Figure 11: More qualitative results of SAN, HAN and PAN-CTX on the MBG dataset.

Figure 12: Two common failure cases of attention models on the MBG dataset. (a) The models attend to a part of a larger structure which resembles the target object. (b) The models are confused by background distractors that are similar to the target object. Although failed, the examples show that the results of PAN-CTX are more visually interpretable (attended to query-like structures).

D MORE QUALITATIVE RESULTS ON VISUAL GENOME

Figure 13: The qualitative results of SAN, HAN and PAN-CTX on the VG dataset. For each example, the attended images are presented in the first row while their attended feature maps are shown in the second row. In the case of PAN, the last two attention maps are visualized, where the attention maps at deeper layers reveal the evidence of aggregation of attention information over previous layers. The red boxes within the final attended images represent the ground-truth bounding boxes for the query object annotated in the VG dataset. Each object may have multiple bounding boxes annotated by different annotators. The annotated answer is presented in the first column. The percentage for each method is the probability of the ground-truth answer under the corresponding method.

Figure 14: More qualitative results of SAN, HAN and PAN-CTX on the VG dataset."}]
SyVVJ85lg
[{"section_index": "0", "section_name": "PALEO: A PERFORMANCE MODEL FOF\nDEEP NEURAL NETWORKS", "section_text": "Evan R. Sparks\nhanggqi@cs.ucla.edu\nsparks@cs.berkeley.edu\nAlthough various scalable deep learning soltware packages have been proposed,\nit remains unclear how to best leverage parallel and distributed computing infras-\ntructure to accelerate their training and deployment. Moreover, the effectiveness\nof existing parallel and distributed systems varies widely based on the neural net-\nwork architecture and dataset under consideration. In order to efficiently explore\nthe space of scalable deep learning systems and quickly diagnose their effective-\nness for a given problem instance, we introduce an analytical performance model\ncalled PALEO. Our key observation is that a neural network architecture carries\nwith it a declarative specification of the computational requirements associated\nwith its training and evaluation. By extracting these requirements from a given\narchitecture and mapping them to a specific point within the design space of soft-\nware, hardware and communication strategies, PALEO can efficiently and accu-\nrately model the expected scalability and performance of a putative deep learning\nsystem. We show that PALEO is robust to the choice of network architecture,\nhardware, software, communication schemes, and parallelization strategies. We\nfurther demonstrate its ability to accurately model various recently published scal-\nability results for CNNs such as NiN, Inception and AlexNet."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning has been successfully applied in many areas including natural language processing\nand computer vision. The scale of modern datasets and the millions to billions of parameters in these\ndeep networks pose new challenges when designing computational systems that leverage parallel\nand distributed computing. Indeed, several important open questions remain:\ne How fast can we train or evaluate a model on a user\u2019s given hardware?\ne Fora given architecture, how can a user best leverage parallel and distributed computation?\n\ne How can we design a new neural network architecture that can be trained and evaluated efficient.\nunder common hardware setups?\nIn response to these fundamental questions, various software packages and systemshave beer\n\npainstakingly developed, e.g. DistBelief (Dean et al.| (2012), TensorFlow (Abadi et al.| 2015)\nMXNet (Chen et al.|/2015), SparkNet (Moritz et al.]]2015), FireCaffe (Iandola et al.|/2016). More-\n\nover, expensive benchmarking efforts, e.g., (2016), have performed brute-force pro-\nfiling on some of these deep learning systems on a handful network architectures.\nIn this work we aim to tackle these questions by taking an analytical approach to model the per-\nformance of arbitrary learning systems. Our work hinges on the observation that a neural network\narchitecture is a declarative specification of the forward and backward propagation steps required\nfor training and deploying the network. However, given this specification, there is a rich design\nspace of algorithms, hardware choices, and communications strategies to most efficiently execute\nthese specifications. We build a novel performance model called PALE! 
that maps this declarative specification to arbitrary points in this design space to estimate the execution time of training and"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "deploying deep neural networks.¹ PALEO applies broadly to a wide variety of neural network architectures and for arbitrary learning systems within this design space, and thus can serve as a valuable tool for practitioners and developers to answer the questions mentioned above.

Hardware acceleration approaches are designed to accelerate the computation of the forward and backward passes and often make use of specialized hardware, such as GPUs (Coates et al., 2013), or more recently custom hardware designed specifically for deep learning (Jouppi, 2016). PALEO accepts constants associated with hardware as input (e.g., peak FLOPS, network bandwidth) and automatically adapts to changes in this input.

Software acceleration via specialized libraries, e.g., cuda-convnet (Krizhevsky, 2014a) and cuDNN (Chetlur et al., 2014), and highly-optimized algorithms for commonly used primitives, e.g., Chetlur et al. (2014) and Lavin (2016), can also be used to accelerate deep model training. PALEO dynamically picks among the best available implementations for each layer at execution time.

Parallelization is a natural approach to consider, and can involve training a neural network with many computational devices (e.g., CPUs, GPUs) on a single machine, or across a network. There are two major parallelization strategies when it comes to training deep neural network models at scale: data parallelism and model parallelism. In classical data parallel systems, each worker stores an identical copy of the model and computes gradients only on a shard of the training examples, and these gradients are aggregated to update the model. In contrast, model parallel systems shard the model itself across the workers, while the training data may be stored on each worker or sharded across the workers. PALEO models both data and model parallel settings.

Communication schemes have also been explored to accelerate incremental model updates across distributed workers. Three of the most common schemes are (Iandola et al., 2016; Zhao & Canny, 2013): (i) the OneToAll scheme has a $2KT$ communication time for $K$ workers, as a master node must communicate with all workers individually, where $T$ is the time for communicating the data through one link in the network; (ii) the Tree AllReduce scheme takes $2\log_2(K)T$ for weights to be aggregated and broadcasted to all workers following a tree topology; and (iii) the Butterfly AllReduce scheme, in which all workers receive the aggregated weights in $\log_2(K)T$ using a butterfly network. We restrict the focus of PALEO to distributed communication schemes that return equivalent results to serial executions, and we thus do not consider the recently introduced butterfly mixing scheme of Zhao & Canny (2013), or non-deterministic asynchronous parameter servers.
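As a rough illustration of the scheme costs quoted above, the sketch below computes the synchronization time of each scheme from the per-link transfer time $T = |D|/B$. The function name and the example numbers are hypothetical, chosen only to make the formulas concrete.

```python
import math

def comm_time(scheme, num_workers, bytes_per_worker, bandwidth_bps):
    """Time for one weight-synchronization round (illustrative sketch).

    T is the time to push one worker's update through a single link,
    |D| / B.  The scheme costs follow the counts quoted above:
    OneToAll = 2*K*T, TreeAllReduce = 2*log2(K)*T,
    ButterflyAllReduce = log2(K)*T.
    """
    T = 8.0 * bytes_per_worker / bandwidth_bps   # seconds per link transfer
    K = num_workers
    if scheme == "OneToAll":
        return 2 * K * T
    if scheme == "TreeAllReduce":
        return 2 * math.log2(K) * T
    if scheme == "ButterflyAllReduce":
        return math.log2(K) * T
    raise ValueError(scheme)

# Example: 50M float32 parameters (AlexNet-scale) over a 20 Gbps link.
params_bytes = 50e6 * 4
for s in ("OneToAll", "TreeAllReduce", "ButterflyAllReduce"):
    print(s, round(comm_time(s, 32, params_bytes, 20e9), 3), "s")
```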
We now present PALEO, a model for the lean consumption of resources during the training of DNNs. PALEO decomposes the total execution time into computation time and communication time; both are estimated for each pass of a neural network's evaluation given user-specified choices within the design space of algorithms, hardware, and communication strategies. Figure 1 illustrates the overall idea. The computation time is calculated from factors including the size of the computation inputs imposed by the network architecture, the complexity of the algorithms and operations involved in the network layers, and the performance of the hardware to be used. The communication time is estimated based on the computational dependencies imposed by the network, the communication bandwidth of the hardware, and the assumed parallelization schemes. Once the network architecture and design space choices are fixed, all of the key factors in PALEO can be derived, and we can estimate execution time without actually implementing the entire network and/or an underlying software package.

¹Training a neural network involves both forward and backward propagation, whereas deploying a trained network on a new data point involves only forward propagation. Thus, estimating the execution time of model training encompasses both model training and deployment, and is the focus of this work.

Training deep neural networks can be very time and resource consuming, and it is not uncommon for the training of a model to take days across tens or hundreds of machines. Several high-level strategies have been proposed to accelerate this process, and these strategies collectively define the design space considered by PALEO.

Figure 1: Overview of the PALEO modeling approach. PALEO decomposes execution time into computation time and communication time, which can be derived from various factors implicitly specified by network architectures and hardware configurations (memory footprints and dependencies from the network architecture; operation complexity, operation selection, parallelization strategy, and communication scheme from the software design; communication bandwidth and computation speed from the scale-up or scale-out hardware)."}, {"section_index": "3", "section_name": "3.1 COMPUTATION MODELING", "section_text": "We first describe the computation model on a single machine. The computation in a neural network can be expressed as a directed graph $\mathcal{N} = \left(\{u^{(j)}\}_{j=1}^{n},\ \{(u^{(i)}, u^{(j)})\}\right)$, where each node $u^{(j)}$ is associated with an operation $f^{(j)}$ on a device $d^{(j)}$; each directed edge $(u^{(i)}, u^{(j)})$ represents the dependency that operation $f^{(j)}$ cannot be executed until $f^{(i)}$ is finished. We use $Pa(u^{(j)})$ to represent the set of immediate parent nodes of $u^{(j)}$. We model each layer in the neural network as a node, and the connections between layers as edges.
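This graph can be represented directly in code; below is a minimal Python sketch of the node structure, with hypothetical layer names, devices, and FLOP counts used purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One layer u^(j): an operation f^(j) placed on a device d^(j)."""
    name: str
    flops: float                  # FLOP count of the operation f
    device: str
    parents: list = field(default_factory=list)   # Pa(u): dependency edges

# A small sequential fragment, conv -> pool -> fc (hypothetical numbers).
conv = Node("conv1", flops=7.2e9, device="gpu0")
pool = Node("pool1", flops=1.0e6, device="gpu0", parents=[conv])
fc   = Node("fc1",   flops=8.2e6, device="gpu0", parents=[pool])
```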
In the following text, we omit the superscript index when there is no ambiguity."}, {"section_index": "4", "section_name": "3.1.1 COMPUTATION TIME FOR A SINGLE LAYER", "section_text": "To model the runtime of a layer $u$, we consider the operation $f$ and decompose the execution time of this operation into three terms (as shown in Figure 2a): the time to fetch the input produced by its parent layers, $R(Pa(u))$; the time to perform the computation of $f$ on the designated device $d$, i.e., $C(f, d)$; and the time to write the outputs to the local memory, $W(f, d)$. Assuming a sequential execution, the runtime for a node $u$ can be written as a simple summation:

$$T(u) = R(Pa(u)) + C(f, d) + W(f, d).$$

Among the three terms, the computation time $C(f, d)$ is calculated as the FLOP (floating-point operation) count of the operation divided by the computation speed (FLOPS; floating-point operations per second) of the device: $C(f, d) = \text{FLOPs}(f) / \text{speed}(d)$. The IO times $R(Pa(u))$ and $W(f, d)$ are calculated as the size of the memory footprints involved in the computation divided by the IO bandwidth of the device. When inputs must be fetched from other devices, e.g., in the case of model parallelism, this IO bandwidth refers to the communication bandwidth between the two devices. PALEO treats the speed and bandwidth of devices as parameters given to the model so that users can configure them to reflect user-specific configurations.

Using this per-layer model, we will next describe how to model the computation time of an entire network. We subsequently present FLOP counts for layer operations commonly used in modern DNNs in Section 4."}, {"section_index": "5", "section_name": "3.1.2 COMPUTATION TIME FOR NETWORKS", "section_text": "We first consider simple sequential structures where layers are constructed one after another, as in Figure 2b. The total execution time can be calculated as the sum of the execution times of all layers: $T(\mathcal{N}) = \sum_{u} T(u)$. While this calculation may seem trivial at first glance, it forms the foundation for modeling execution time for more complex architectures.

Figure 2: (a) The execution time of a node in the computation graph consists of the time for fetching input, computing results, and writing results to memory. (b) An example of a sequential computation graph segment. (c) An example of a parallel computation graph segment.

Parallel structures are not uncommon in DNNs; for example, the Inception model (Szegedy et al., 2015a) contains layers that can be evaluated simultaneously, and layers on different workers can run in parallel in model parallel setups (Dean et al., 2012). Figure 2c illustrates a parallel structure where two convolutional layers (each followed by a pooling layer) are scheduled to be executed on two devices.

To model the computation time of parallel structures, we identify synchronization barriers before and after every parallel structure and introduce the notion of a supernode $U = \{G^{(i)}\}_{i=1}^{k}$, a set of disjoint subgraphs sandwiched by the synchronization barriers in the computation graph. When substituting the subgraphs with the supernode, the network is reduced to the sequential structure described above. For the supernode, the execution time $T(U)$ is within the range $\left[\max_i T(G^{(i)}),\ \sum_i T(G^{(i)})\right]$, where the lower bound corresponds to perfect parallelization and the upper bound corresponds to sequential execution. Note that the execution time of a subgraph $T(G^{(i)})$ can be calculated recursively.
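For the sequential case, the per-layer decomposition and the network sum translate directly into code. The sketch below assumes illustrative device constants (peak FLOPS and IO bandwidth); in PALEO these are user-supplied parameters rather than the fixed values shown here.

```python
def layer_time(flops, in_bytes, out_bytes,
               device_flops=6.0e12, io_bandwidth=2.0e11):
    """T(u) = R(Pa(u)) + C(f, d) + W(f, d), assuming sequential execution.

    device_flops (peak FLOPS) and io_bandwidth (bytes/s) are illustrative
    placeholders for the user-configured device parameters.
    """
    R = in_bytes / io_bandwidth          # fetch inputs from parent layers
    C = flops / device_flops             # compute: FLOPs(f) / speed(d)
    W = out_bytes / io_bandwidth         # write outputs to local memory
    return R + C + W

def network_time(layers):
    """Sequential structure: T(N) is the sum of the per-layer times."""
    return sum(layer_time(*layer) for layer in layers)

# Example: two layers given as (FLOPs, input bytes, output bytes).
layers = [(7.2e9, 0.6e6, 12.8e6), (1.5e9, 12.8e6, 3.2e6)]
print(f"{network_time(layers) * 1e3:.3f} ms")
```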
"}, {"section_index": "6", "section_name": "3.1.3 COMPUTATION MODELING FOR LAYER OPERATIONS", "section_text": "In modern DNNs, the convolutional layer is one of the most commonly used and computationally intensive types of layers. For this reason, there have been many heavily optimized implementations (Chetlur et al., 2014; Vasilache et al., 2015; Lavin, 2016). Deriving plausible FLOP counts for other types of layers is a straightforward process, and in this section, we consider two leading implementations for convolutional operations: matrix multiplication and Fast Fourier Transform.

Following the notation used by Chetlur et al. (2014), a 2D convolutional layer during forward propagation² takes an input feature map $D_{N \times C \times H \times W}$ (a batch of $N$ input feature maps with shape $H \times W$ and $C$ channels) and a set of convolutional filters $F_{K \times C \times R \times S}$ ($K$ filters with shape $R \times S$ and $C$ channels). It produces $N \times K$ feature maps, each of shape $P \times Q$, which can be calculated from the shapes of the inputs and filters together with additional striding and padding parameters. The FLOP count for the convolution operation can be expressed as $2KCRSNPQ$. A commonly used implementation is to reduce convolution operations to matrix multiplications, which can be efficiently computed with well-optimized SGEMM routines on various platforms. Although these FLOP counts ignore auxiliary operations (e.g., indexing arithmetic in efficient implementations), they nonetheless provide a good estimate of FLOP counts for matrix multiplication implementations.

Another implementation is based on the Fast Fourier Transform (Vasilache et al., 2015): both input feature maps and filters are transformed into the frequency domain, then element-wise multiplications are performed, followed by an inverse Fourier transform. This implementation introduces computation and memory overhead in the discrete Fourier transforms, but reduces the computational complexity to $O(NCKHW + (NC + CK + NK)HW \log(HW))$. Convolutional layers with large filters or a large problem size can benefit from FFT implementations. When counting FLOPs, it is not possible to get exact counts without knowing the underlying implementation details. In PALEO, we adopt the commonly used FFT complexity $5n \log_2 n$ as the FLOP count for complex-valued transformations of size $n$ (Cooley & Tukey, 1965). To account for the IO overhead caused by auxiliary memories, PALEO estimates the memory size required for complex-valued matrices in the frequency domain and incorporates it into the data reading and writing terms. For FFT-based implementations with tilings, PALEO estimates the number of tiles from the convolution specifications.

²Our arguments generalize to N-dimensional settings, and similar arguments apply for the backward pass.
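As a concrete illustration of the two FLOP models, the sketch below implements the $2KCRSNPQ$ count for the matrix-multiplication lowering and an FFT-based estimate built from the complexity expression above with the $5n\log_2 n$ convention. The constant used for the element-wise frequency-domain product is an illustrative assumption; PALEO's actual accounting also covers tiling and the IO overhead of the auxiliary complex-valued buffers.

```python
import math

def conv_flops_gemm(N, C, H, W, K, R, S, P, Q):
    """FLOPs for a conv layer lowered to matrix multiplication:
    2 * K * C * R * S * N * P * Q (a multiply-add counted as 2 ops)."""
    return 2.0 * K * C * R * S * N * P * Q

def conv_flops_fft(N, C, H, W, K):
    """FFT-based estimate from O(NCKHW + (NC + CK + NK) * HW * log(HW)),
    using the 5 n log2 n FLOP convention for a size-n transform."""
    hw = H * W
    transforms = (N * C + C * K + N * K) * 5.0 * hw * math.log2(hw)
    elementwise = 2.0 * N * C * K * hw   # rough, illustrative constant
    return transforms + elementwise

# 3x3 vs 11x11 filters on a 64x64 map (stride 1, 'same' padding).
args = dict(N=32, C=64, H=64, W=64, K=64)
small = conv_flops_gemm(R=3, S=3, P=64, Q=64, **args)
large = conv_flops_gemm(R=11, S=11, P=64, Q=64, **args)
fft = conv_flops_fft(**args)          # independent of the filter size
print(f"GEMM 3x3: {small:.2e}  GEMM 11x11: {large:.2e}  FFT: {fft:.2e}")
```

As the usage example suggests, the FFT cost does not grow with the filter size, which is why large filters favor FFT implementations.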
The choice of algorithm (matrix multiplication or FFT) is problem specific, as it depends on the filter size, strides, input size of the convolutional layers, and memory workspace. In order to derive reasonable estimates for user-specified DNNs comparable to real executions, it is important for PALEO to make decisions comparable to real-world systems. Two common approaches are employed in existing DNN software frameworks and libraries to choose between these algorithms: (i) using predefined heuristics based on offline benchmarks; (ii) autotuning to empirically evaluate the available algorithms on the given specification. Since autotuning is tied to platform and software implementations, for maximum generality PALEO by default takes the first approach. In particular, PALEO uses heuristics from cuDNN to make algorithm choices while also accounting for user preferences."}, {"section_index": "7", "section_name": "3.2 COMMUNICATION MODELING", "section_text": "We now describe our modeling of communication among multiple workers. Let $|D|$ be the size of the data to be communicated between two workers, and define $B$ as the bandwidth of the communication channel. Then the communication time can simply be written as $T_{\text{comm}} = |D| / B$. By using different bandwidth configurations, PALEO works for both scale-up setups (multiple GPUs on one machine) and scale-out setups (multiple machines in a cluster). Moreover, in data-parallel settings, an AllReduce operation is performed to synchronize model parameters across all workers after every backward pass. PALEO considers three communication schemes: OneToAll, Tree AllReduce, and Butterfly AllReduce. The communication time under these three schemes is described in Section 2."}, {"section_index": "8", "section_name": "3.3 PLATFORM PERCENT OF PEAK", "section_text": "Thus far, we have assumed that deep learning software platforms make perfect use of their underlying hardware: that the CPUs and GPUs are operating at "peak FLOPS", and that network and IO links are fully saturated. This has allowed our model to be platform independent.

However, this assumption is unreasonable in practice. For instance, achieving peak FLOPS is a difficult proposition, usually requiring customized libraries developed by organizations with intimate knowledge of the underlying hardware, e.g., Intel's MKL (2009), ATLAS (2005), and cuDNN. Even these specially tuned libraries may fall short of peak execution by as much as 40% (ATLAS timings). Further, any computation done outside the scope of PALEO (e.g., job scheduling, data copying) will exacerbate the observed inefficiency in practice. Sometimes such inefficiencies are warranted from the perspective of ease of programmability or maintenance of the learning platforms.

Rather than trying to measure and capture every source of inefficiency in every learning framework, we take a small number of representative deep learning workloads which contain convolutions, pooling, dropout, and fully connected layers and run them for a short time on a single GPU. Given the observed total throughput and estimated total throughput on this benchmark, we fit a scaling constant to estimate a platform percent of peak (PPP) parameter which captures the average relative inefficiency of the platform compared to peak FLOPS. Highly specialized frameworks (e.g., cuDNN) will in general have a computational PPP that is close to 100%, while frameworks with higher overhead may have PPP constants closer to 50% or less.

We follow a similar benchmarking procedure to estimate PPP for the communication link for TensorFlow. For the FireCaffe experiments, we estimate the communication PPP based on the empirical results for communication reported in Table 4 of the paper.
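A minimal sketch of this calibration step might look as follows. The least-squares form is an assumption on our part, since the text only specifies fitting a scaling constant between observed and estimated throughput; the benchmark numbers are invented for the example.

```python
def fit_ppp(observed_times, estimated_times):
    """Fit a platform-percent-of-peak constant from benchmark runs.

    Given wall-clock times observed on a platform and peak-FLOPS
    estimates, fit observed ~= estimated / PPP by least squares
    (a sketch; the exact fitting procedure is not specified beyond
    'fit a scaling constant').
    """
    num = sum(e * e for e in estimated_times)
    den = sum(e * o for e, o in zip(estimated_times, observed_times))
    return num / den   # PPP in (0, 1] when observed >= estimated

# Example: three benchmark workloads, each ~1.7x slower than peak.
ppp = fit_ppp([17.0, 85.0, 41.0], [10.0, 50.0, 24.0])
print(f"PPP ~= {ppp:.2f}")   # ~0.59: scale estimated compute by 1/PPP
```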
"}, {"section_index": "9", "section_name": "4 EXPERIMENTS", "section_text": "We now present empirical results which illustrate that PALEO is robust to the choice of network architecture, hardware, communication schemes, and parallelization strategies.

We first compare PALEO-estimated runtimes with actual runtimes measured from TensorFlow (Abadi et al., 2015) execution on two popular CNN architectures: the one-tower variant of AlexNet (Krizhevsky, 2014b) and the 16-layer VGG network (Simonyan & Zisserman, 2014). PALEO uses cuDNN heuristics to choose algorithms, and the auto-tuning mechanism in TensorFlow is disabled. Experiments are run on an NVIDIA TITAN X GPU with a 4 GB workspace limit.

For convolutional and fully connected layers, we evaluate forward computation, backward computation with respect to layer inputs, and backward computation with respect to filters separately (see Figure 4 in the appendix for the plots of the layer-by-layer comparison). Table 1 shows a comparison of the full forward pass and backward pass with all layers included. PALEO's per-layer estimates are quite close to the actual TensorFlow execution, with only one layer, 'fc6', consistently being underestimated by PALEO.³ In spite of this issue with 'fc6', our full pass estimates are remarkably accurate.

Table 1: Full pass time of TensorFlow and PALEO estimation on AlexNet and VGG-16.

                              Forward pass (ms)   Backward pass (ms)
  AlexNet   TensorFlow              44.00               155.10
            PALEO Estimation        45.96               118.44
  VGG-16    TensorFlow             400.46              1117.48
            PALEO Estimation       435.46              1077.27

³Examining the TensorFlow execution with the NVIDIA profiler revealed that TensorFlow spent two-thirds of its reported 'fc6' time in transforming data layout between NHWC and NCHW when calling the underlying cuBLAS primitives.

We now revisit the questions posed at the beginning of the paper and demonstrate how PALEO can help in answering them. In this subsection we present three case studies. We extract experiment setups, including network architectures, hardware specifications, communication schemes, and parallelization strategies, from selected publications focusing on the scalability of CNNs. We then plug those configurations into PALEO and compare the simulated scalability results with the results reported in the original publications. Table 2 summarizes the configurations of PALEO in these experiments.

Table 2: PALEO configurations used in the case studies.

                      Case 1             Case 2              Case 3
  Net                 NiN                Inception v3        AlexNet
  Device              NVIDIA K20X        NVIDIA K20          NVIDIA K20
  Workers             Up to 128          Up to 100           Up to 8
  Bandwidth           70 Gbps            10 Gbps             6 GB/s
  Communication       Tree AllReduce     Parameter Server    Various
  Parallelization     Data Parallelism   Data Parallelism    Hybrid
  Platform            Caffe              TensorFlow          cuda-convnet2
  One Step Time:⁴
    PALEO Estimation  1918 ms            4269 ms             402 ms
    Reported Time⁵    2275 ms            -                   418 ms

⁴Total time of forward pass, backward pass, and parameter update for one mini-batch on one worker.
⁵Reported times for Cases 1 and 3 are derived approximately from information in the publications. For Case 2 no run time information is provided.

FireCaffe (Iandola et al., 2016) adopts the Tree AllReduce communication scheme when training a NiN model (Lin et al., 2013) in data parallel settings with up to 128 servers on the Titan supercomputer. They report a 38x speedup for NiN with batch size 1024 relative to single-GPU performance. Table 3 shows the results from PALEO compared with the results reported by FireCaffe.

Table 3: Comparison between PALEO estimation and FireCaffe for training NiN.

                             FireCaffe               PALEO Estimation
  Workers   Batch size   Train Time   Speedup     Train Time   Speedup
  1         256          5.8 days     1x          4.9 days     1x
  32        256          11 hours     13x         7.6 hours    15.5x
  32        1024         6 hours      23x         4.6 hours    25.3x
  128       1024         3.6 hours    39x         2.3 hours    51.6x
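To illustrate the kind of back-of-the-envelope estimate behind such projections, the sketch below combines a fixed per-worker compute time with the Tree AllReduce synchronization cost under weak scaling. All numbers here are illustrative placeholders, not PALEO's actual estimates, which additionally model per-layer IO and platform inefficiency.

```python
import math

def step_time(compute_s, param_bytes, bandwidth_bps, workers):
    """One data-parallel step under Tree AllReduce (illustrative sketch).

    Weak scaling: each worker keeps its per-worker batch, so per-step
    compute stays fixed while a 2*log2(K)*T synchronization is added.
    """
    T = 8.0 * param_bytes / bandwidth_bps      # one link transfer
    comm = 2 * math.log2(workers) * T if workers > 1 else 0.0
    return compute_s + comm

def speedup(compute_s, param_bytes, bandwidth_bps, workers):
    # K workers process K per-worker batches per step -> throughput ratio.
    return (workers * compute_s /
            step_time(compute_s, param_bytes, bandwidth_bps, workers))

# NiN-like example: ~7.6M parameters, 70 Gbps links, 50 ms compute/step.
for k in (1, 32, 128):
    print(k, "workers:", round(speedup(0.05, 7.6e6 * 4, 70e9, k), 1), "x")
```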
Murray et al. (2016) reported their results on synchronously training the Inception model (Szegedy et al., 2015b) with TensorFlow and achieved a 56x speedup with 100 workers. They apply a weak scaling strategy with batch size 256 to keep the GPUs saturated. Although Murray et al. (2016) leveraged a distributed parameter server rather than one of the three communication schemes considered in PALEO, the communication cost of Butterfly AllReduce can be viewed as a lower bound (Zhao & Canny, 2013). To account for the fact that they train with worker nodes each of which has 8 GPUs, we assume a linear speedup for GPUs on the same host. Figure 3a shows a comparison between reported speedups and PALEO-estimated speedups. For absolute runtime, in one of the experiments, their model completes 20 epochs of training after 100 hours when using 8 Tesla K40s and a batch size of 256. PALEO projects a 111-hour runtime under the same setting."}, {"section_index": "10", "section_name": "4.2.3 CASE 3: ALEXNET WITH HYBRID PARALLELISM", "section_text": "Krizhevsky (2014b) describes a hybrid model and data parallelism approach for training AlexNet using up to 8 GPUs with a weak scaling strategy. In his setup, each of the two CPUs connects to 4 GPUs, and the communication bandwidth is penalized by 50% across the two groups, as mentioned in the paper. Table 4 shows the comparison between PALEO's projection and the original result, which are quite similar. Moreover, whereas Krizhevsky (2014b) does not quantify the speedup of hybrid parallelism relative to strict data parallelism, PALEO simulates training the entire network with only data parallelism (see the last two columns of Table 4) in order to estimate this speedup.

Table 4: Comparison between PALEO estimation and Krizhevsky (2014b) for training AlexNet.

            One Weird Trick           PALEO Estimation
            Hybrid parallelism        Hybrid parallelism        Data parallelism
  Workers   Train Time (h)  Speedup   Train Time (h)  Speedup   Train Time (h)  Speedup
  1         98.95           1x        96.31           1x        96.31           1x
  2         50.24           1.95x     49.57           1.94x     55.90           1.72x
  4         26.20           3.74x     25.42           3.79x     32.82           3.03x
  8         16.68           6.25x     14.37           6.70x     23.65           5.40x

In this subsection, we use PALEO in two hypothetical setups to analyze the scalability of AlexNet and a GAN model under different communication schemes."}, {"section_index": "11", "section_name": "4.3.1 ALEXNET IN A CLOUD-BASED SETUP", "section_text": "In this study, we present an analysis of data parallel training of AlexNet. We assume a modern cloud setup with a cluster of servers, each equipped with an NVIDIA K80 GPU connected to a 20 Gbps network. In contrast to the Inception model with 23 million parameters, the one-tower variant of AlexNet has 50 million parameters and therefore doubles the communication workload when training with data parallelism.

We show strong scaling for all three communication schemes in Figure 3c. Even when assuming a fairly large batch size of 2048, which is beneficial in distributed settings, we see very modest speedups. The OneToAll scheme achieves a max speedup of less than 2x using 4 workers, while the communication-efficient Butterfly AllReduce scheme achieves a max speedup of roughly 5x when using 32 workers. The weak scaling results, shown in Figure 3b, show drastically improved scaling results, as we observe nearly linear speedups as we increase the number of workers.
However, it is important to note that we are increasing the effective batch size as we increase the number of workers, and it is well known that training with large effective batch sizes can yield models with substandard accuracy.

Figure 3: Comparison of PALEO projected speedups for various networks under different scaling strategies and communication schemes (OneToAll, Tree AllReduce, and Butterfly AllReduce; panel (a) also includes the speedups reported by Murray et al. (2016)). (a) Inception / weak scaling. (b) AlexNet / weak scaling. (c) AlexNet / strong scaling. (d) GAN / strong scaling."}, {"section_index": "12", "section_name": "4.3.2 GAN ARCHITECTURE", "section_text": "PALEO can be applied to architectures other than CNNs. We profile a generative adversarial network (GAN) for the LSUN dataset with the same hardware assumptions as the previous case study. Table 5 shows that the PALEO estimations are close to the empirical TensorFlow run time for both the discriminator and generator networks. Figure 3d plots the estimated speedups for training the model with a batch size of 256 on up to 128 workers under strong scaling. Without communication-intensive fully-connected layers, training this GAN architecture is more scalable than AlexNet, but PALEO still only predicts an 8x sub-linear speedup with 64 workers.

Table 5: Full pass time of the discriminator and generator in a GAN architecture.

                                     Forward pass (ms)   Backward pass (ms)
  Discriminator   TensorFlow              30.19                77.39
                  PALEO Estimation        27.55                79.25
  Generator       TensorFlow             110.11               374.18
                  PALEO Estimation       117.02               324.49

We introduced PALEO, an analytical performance model for exploring the space of scalable deep learning systems. By extracting the computational requirements carried by neural network architectures and mapping them to the design space of software, hardware, and communication strategies, PALEO can effectively and accurately model the expected scalability and performance of a putative deep learning system."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Atlas timings. URL http://math-atlas.sourceforge.net/timing/.

Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

Thomas Breuel. The effects of hyperparameters on SGD training of neural networks. arXiv:1508.02788, 2015.

Tianqi Chen et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015.

Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759, 2014.

Jeffrey Dean et al. Large scale distributed deep networks. In NIPS, pp. 1223-1231, 2012.

Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997, 2014b.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013.

Huasha Zhao and John Canny. Butterfly mixing: Accelerating incremental-update algorithms on clusters. In SIAM Conf. on Data Mining.
SIAM, 2013.

Intel Math Kernel Library. Reference Manual. Intel Corporation, Santa Clara, USA, 2009. ISBN 630813-054US.

Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.

Philipp Moritz, Robert Nishihara, Ion Stoica, and Michael I. Jordan. SparkNet: Training deep networks in Spark. arXiv:1511.06051, 2015.

We include supplementary figures in the appendix due to the space constraint.

Figure 4: Layer-wise comparison of PALEO estimation and TensorFlow measurements for forward computation, backward computation with respect to inputs, and backward computation with respect to filters (times in ms). (a) Layer-wise comparison in AlexNet. (b) Layer-wise comparison in VGG-16."}]
rJY0-Kcll
[{"section_index": "0", "section_name": "OPTIMIZATION AS A MODEL FOR\nFEW-SHOT LEARNING", "section_text": "Sachin Ravi* and Hugo Larochelle\nTwitter, Cambridge, USA\nfsachinr, hugo}@twitter.com\nThough deep neural networks have shown great success in the large data domain\nthey generally perform poorly on few-shot learning tasks, where a classifier has t\nquickly generalize after seeing very few examples from each class. The genera\nbelief is that gradient-based optimization in high capacity classifiers requires mam\niterative steps over many examples to perform well. Here, we propose an LSTM\nbased meta-learner model to learn the exact optimization algorithm used to trai\nanother /earner neural network classifier in the few-shot regime. The parametriza\ntion of our model allows it to learn appropriate parameter updates specifically fo\nthe scenario where a set amount of updates will be made, while also learning |\ngeneral initialization of the learner (classifier) network that allows for quick con\nvergence of training. We demonstrate that this meta-learning model is competitiv\nwith deep metric-learning techniques for few-shot learning."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "ihere seem to be tWO Main reasons why gradient-Ddased OpunuZauion Talis in the race OF Tew ta-\nbeled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum\nba 203), wee Adagrad (Duchi et al.| 2011), Adadelta (2012), and ADAM (Kingma &\nBa] |2014), weren\u2019t designed specifically to perform well under the constraint of a set number of\nupdates. Specifically when applied to non-convex optimization problems, with a reasonable choice\nof hyperparameters these algorithms don\u2019t have very strong guarantees of speed of convergence,\nbeyond that they will eventually converge to a good solution after what could be many millions of\niterations. Secondly, for each separate dataset considered, the network would have to start from a\nrandom initialization of its parameters, which considerably hurts its ability to converge to a good\nsolution after a few updates. Transfer learning (C: 5\n2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task\nwhich has more labelled data; however, it has been observed that the benefit of a pre-trained network\n\ngreatly decreases as the task the network was trained on diverges from the target task (Yosinski et al.\n\n2014). What is needed is a systematic way to learn a beneficial common initialization that would"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep learning has shown great success in a variety of tasks with large amounts of labeled data in\nimage classification (He et al.|{2015), machine translation (Wu et al.|{2016), and speech modeling\nOord et al.|/2016). These achievements have relied on the fact that optimization of these deep,\nhigh-capacity models requires many iterative updates across many labeled examples. This type of\noptimization breaks down in the small data regime where we want to learn from very few labeled\nexamples. In this setting, rather than have one large dataset, we have a set of datasets, each with few\nannotated examples per class. The motivation for this task lies not only in the fact that humans, even\nchildren, can usually generalize after just one example of a given object, but also because models\nexcelling at this task would have many useful applications. 
Firstly, they would help alleviate data collection, as we would not require millions of labeled examples to attain reasonable performance. Furthermore, in many fields, data exhibits the characteristic of having many different classes but few examples per class. Models that are able to generalize from few examples would be able to capture this type of data effectively.

*Work done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached at sachinr@princeton.edu.

What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning.

Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks."}, {"section_index": "3", "section_name": "2 TASK DESCRIPTION", "section_text": "We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset $D$ and usually split $D$ so that we optimize parameters $\theta$ on a training set $D_{train}$ and evaluate its generalization on the test set $D_{test}$. In meta-learning, however, we are dealing with meta-sets $\mathscr{D}$ containing multiple regular datasets, where each $D \in \mathscr{D}$ has a split of $D_{train}$ and $D_{test}$.

We consider the $k$-shot, $N$-class classification task, where for each dataset $D$, the training set consists of $k$ labelled examples for each of $N$ classes, meaning that $D_{train}$ consists of $k \cdot N$ examples, and $D_{test}$ has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set.

In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing ($\mathscr{D}_{meta-train}$, $\mathscr{D}_{meta-validation}$, and $\mathscr{D}_{meta-test}$, respectively). On $\mathscr{D}_{meta-train}$, we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets $D_{train}$ and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set $D_{test}$. Using $\mathscr{D}_{meta-validation}$ we can perform hyper-parameter selection of the meta-learner and evaluate its generalization performance on $\mathscr{D}_{meta-test}$.

For this formulation to correspond to the few-shot learning setting, each training set in datasets $D \in \mathscr{D}$ will contain few labeled examples (we consider $k = 1$ or $k = 5$) that must be used to generalize to good performance on the corresponding test set. An example of this formulation is given in Figure 1.
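Constructing one such episode is straightforward; the following sketch samples a k-shot, N-class dataset $D = (D_{train}, D_{test})$ from a mapping of classes to examples. The function and variable names are hypothetical, used only to make the sampling procedure concrete.

```python
import random

def sample_episode(class_to_examples, N=5, k=1, test_per_class=15, rng=random):
    """Sample one k-shot, N-class episode D = (D_train, D_test) (sketch).

    class_to_examples maps each class label in the current meta-set's
    class split to its list of examples.
    """
    classes = rng.sample(sorted(class_to_examples), N)
    D_train, D_test = [], []
    for label, cls in enumerate(classes):
        ex = rng.sample(class_to_examples[cls], k + test_per_class)
        D_train += [(x, label) for x in ex[:k]]       # k shots per class
        D_test  += [(x, label) for x in ex[k:]]       # held-out evaluation
    return D_train, D_test

# Toy meta-training split: 8 classes with 30 (dummy) examples each.
meta_train = {f"class{c}": list(range(30)) for c in range(8)}
D_train, D_test = sample_episode(meta_train, N=5, k=1)
assert len(D_train) == 5 and len(D_test) == 5 * 15
```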
"}, {"section_index": "4", "section_name": "3 MODEL", "section_text": "We now move to the description of our proposed model for meta-learning."}, {"section_index": "5", "section_name": "3.1 MODEL DESCRIPTION", "section_text": "Consider a single dataset, or episode, $D \in \mathscr{D}_{meta-train}$. Suppose we have a learner neural network classifier with parameters $\theta$ that we want to train on $D_{train}$. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form

$$\theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t,$$

where $\theta_{t-1}$ are the parameters of the learner after $t - 1$ updates, $\alpha_t$ is the learning rate at time $t$, $\mathcal{L}_t$ is the loss optimized by the learner for its $t$-th update, $\nabla_{\theta_{t-1}} \mathcal{L}_t$ is the gradient of that loss with respect to parameters $\theta_{t-1}$, and $\theta_t$ is the updated parameters of the learner.

Figure 1: Example of meta-learning setup. The top represents the meta-training set $\mathscr{D}_{meta-train}$, where inside each gray box is a separate dataset that consists of the training set $D_{train}$ (left side of dashed line) and the test set $D_{test}$ (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set $\mathscr{D}_{meta-test}$ is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in $\mathscr{D}_{meta-train}$ (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997):

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,$$

which matches gradient descent if $f_t = 1$, $c_{t-1} = \theta_{t-1}$, $i_t = \alpha_t$, and $\tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t$.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or $c_t = \theta_t$, and the candidate cell state $\tilde{c}_t = \nabla_{\theta_{t-1}} \mathcal{L}_t$, given how valuable information about the gradient is for optimization. We define parametric forms for $i_t$ and $f_t$ so that the meta-learner can determine optimal values through the course of the updates. We begin with the input gate $i_t$:
$$i_t = \sigma\left(W_I \cdot \left[\nabla_{\theta_{t-1}} \mathcal{L}_t,\ \mathcal{L}_t,\ \theta_{t-1},\ i_{t-1}\right] + b_I\right),$$

meaning that the learning rate is a function of the current parameter value $\theta_{t-1}$, the current gradient $\nabla_{\theta_{t-1}} \mathcal{L}_t$, the current loss $\mathcal{L}_t$, and the previous learning rate $i_{t-1}$. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for $f_t$, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:

$$f_t = \sigma\left(W_F \cdot \left[\nabla_{\theta_{t-1}} \mathcal{L}_t,\ \mathcal{L}_t,\ \theta_{t-1},\ f_{t-1}\right] + b_F\right).$$

Additionally, notice that we can also learn the initial value of the cell state $c_0$ for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren't tied to sum to one."}, {"section_index": "6", "section_name": "3.2 PARAMETER SHARING & PREPROCESSING", "section_text": "Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus, as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values, but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model, and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs $(\nabla_{\theta_{t-1}, i} \mathcal{L}_t, \mathcal{L}_t)$ for each dimension $i$.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:

$$x \rightarrow \begin{cases} \left(\frac{\log(|x|)}{p},\ \operatorname{sgn}(x)\right) & \text{if } |x| \geq e^{-p} \\ \left(-1,\ e^{p} x\right) & \text{otherwise.} \end{cases}$$

This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of $p = 10$ in the above formula worked well in our experiments.

The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode) $D = (D_{train}, D_{test}) \in \mathscr{D}_{meta-test}$, a good meta-learner model will, given a series of learner gradients and losses on the training set $D_{train}$, suggest a series of updates for the classifier that pushes it towards good performance on the test set $D_{test}$.

Thus, to match test time conditions, when considering each dataset $D \in \mathscr{D}_{meta-train}$, the training objective we use is the loss $\mathcal{L}_{test}$ of the produced classifier on $D$'s test set $D_{test}$. While iterating over the examples in $D$'s training set $D_{train}$, at each time step $t$ the LSTM meta-learner receives $(\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t)$ from the learner (the classifier) and proposes the new set of parameters $\theta_t$. The process repeats for $T$ steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2.

Algorithm 1 (inputs): Meta-training set $\mathscr{D}_{meta-train}$; Learner $M$ with parameters $\theta$; Meta-Learner $R$ with parameters $\Theta$.
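A schematic sketch of one outer iteration of this procedure is given below: the learner is unrolled for T update steps on $D_{train}$, and the resulting test-set loss is what gets backpropagated into the meta-learner. The callables and the toy quadratic learner are stand-ins for the actual classifier and LSTM meta-learner, used only to make the control flow concrete.

```python
def meta_train_step(meta_learner, learner_loss_and_grad,
                    theta0, D_train, D_test, T):
    """One outer iteration of meta-training (illustrative sketch).

    learner_loss_and_grad(theta, batch) -> (loss, grad) evaluates the
    learner; meta_learner.update(...) plays the role of the LSTM state
    update c_t = f_t * c_{t-1} + i_t * c~_t. Both are stand-ins.
    """
    theta, state = theta0, meta_learner.initial_state()
    for t in range(T):                       # T updates on the training set
        loss, grad = learner_loss_and_grad(theta, D_train)
        theta, state = meta_learner.update(grad, loss, theta, state)
    test_loss, _ = learner_loss_and_grad(theta, D_test)
    return test_loss    # in training, backpropagated into the meta-learner

class SGDMetaLearner:
    """Degenerate meta-learner: fixed i_t = lr, f_t = 1 (plain SGD)."""
    def __init__(self, lr): self.lr = lr
    def initial_state(self): return None
    def update(self, grad, loss, theta, state):
        return [p - self.lr * g for p, g in zip(theta, grad)], state

def quad(theta, batch):     # toy learner: loss = sum((theta_i - 3)^2)
    loss = sum((p - 3.0) ** 2 for p in theta)
    return loss, [2 * (p - 3.0) for p in theta]

print(meta_train_step(SGDMetaLearner(0.1), quad, [0.0, 0.0], None, None, T=12))
```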
"}, {"section_index": "7", "section_name": "3.3.1 GRADIENT INDEPENDENCE ASSUMPTION", "section_text": "Notice that our formulation would imply that the losses $\mathcal{L}_t$ and gradients $\nabla_{\theta_{t-1}} \mathcal{L}_t$ of the learner are dependent on the parameters of the meta-learner. Gradients on the meta-learner's parameters should normally take this dependency into account. However, as discussed by Andrychowicz et al. (2016), this complicates the computation of the meta-learner's gradients. Thus, following Andrychowicz et al. (2016), we make the simplifying assumption that these contributions to the gradients are not important and can be ignored, which allows us to avoid taking second derivatives, a considerably expensive operation. We were still able to train the meta-learner effectively in spite of this simplifying assumption.

Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line divides examples from the training set $D_{train}$ and test set $D_{test}$. Each $(X_i, Y_i)$ is the $i$-th batch from the training set, whereas $(X, Y)$ is all the elements from the test set. The dashed arrows indicate that we do not back-propagate through that step when training the meta-learner. We refer to the learner as $M$, where $M(X; \theta)$ is the output of learner $M$ using parameters $\theta$ for inputs $X$. We also use $\nabla_t$ as a shorthand for $\nabla_{\theta_{t-1}} \mathcal{L}_t$.

When training LSTMs, it is advised to initialize the LSTM with small random weights and to set the forget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enabling gradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we needed to initialize the input gate bias to be small so that the input gate value (and thus the learning rate) used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stability of training.
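The sketch below ties together the coordinate-wise update rule from Section 3.1 with this initialization scheme: shared gate weights across parameter coordinates, a forget-gate bias pushing $f_t$ toward 1, and an input-gate bias keeping $i_t$ small. The specific bias values and weight scales are illustrative assumptions, not the trained values.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def meta_lstm_update(theta, grad, loss, i_prev, f_prev, W_I, b_I, W_F, b_F):
    """Per-coordinate input/forget gates and cell-state update (sketch).

    theta, grad, i_prev, f_prev are arrays over learner parameter
    coordinates; W_I, W_F are shared 4-vectors (parameter sharing).
    """
    x_i = np.stack([grad, np.full_like(grad, loss), theta, i_prev])
    x_f = np.stack([grad, np.full_like(grad, loss), theta, f_prev])
    i_t = sigmoid(W_I @ x_i + b_I)       # learning-rate-like input gate
    f_t = sigmoid(W_F @ x_f + b_F)       # weight-decay-like forget gate
    theta_t = f_t * theta + i_t * (-grad)   # c_t = f_t*c_{t-1} + i_t*c~_t
    return theta_t, i_t, f_t

# Illustrative initialization: forget gate near 1, input gate small.
rng = np.random.default_rng(0)
W_I, W_F = rng.normal(0, 0.01, 4), rng.normal(0, 0.01, 4)
b_I, b_F = -4.0, 4.0        # sigmoid(-4) ~ 0.02, sigmoid(4) ~ 0.98
theta, grad = rng.normal(size=10), rng.normal(size=10)
theta, i_t, f_t = meta_lstm_update(theta, grad, loss=1.3,
                                   i_prev=np.zeros(10), f_prev=np.ones(10),
                                   W_I=W_I, b_I=b_I, W_F=W_F, b_F=b_F)
```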
Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thus speed up the learning of deep neural networks by reducing internal covariate shift within the learner's hidden layers. This reduction is achieved by normalizing each layer's pre-activation, by subtracting the mean and dividing by the standard deviation. During training, the mean and standard deviation are estimated using the current batch being trained on, whereas during evaluation a running average of both statistics calculated on the training set is used. We need to be careful with batch normalization for the learner network in the meta-learning setting, because we do not want to collect mean and standard deviation statistics during meta-testing in a way that allows information to leak between the different datasets (episodes) being considered. One easy way to prevent this issue is to not collect statistics at all during the meta-testing phase, and to just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-training and meta-testing conditions, causing the meta-learner to learn a method of optimization that relies on batch statistics which it now does not have at meta-testing time. In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset $D \in \mathscr{D}$ during $\mathscr{D}_{meta-test}$, but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set, whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning."}, {"section_index": "8", "section_name": "4.1 META-LEARNING", "section_text": "Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc (2016), both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method, the classifier network is directly produced rather than being fine-tuned after multiple training steps.
Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest neighbor loss involving the cosine similarities of embeddings produced by a convolutional network."}, {"section_index": "9", "section_name": "5 EVALUATION", "section_text": "In this section, we describe the results of experiments examining the properties of our model and comparing our method's performance against different approaches.† Following Vinyals et al. (2016), we consider the k-shot, N-class classification setting where a meta-learner trains on many related but small training sets of k examples for each of N classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a k-shot, N-class task dataset $D = (D_{train}, D_{test}) \in \mathscr{D}$, we do the following: we first sample N classes from the list of classes corresponding to the meta-set we consider. We then sample k examples from each of those classes. These k examples together compose the training set $D_{train}$. Then, an additional fixed amount of the rest of the examples is sampled to yield a test set $D_{test}$. We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

†Code can be found at https://github.com/twitter/meta-learning-lstm.

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a 3x3 convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a 2x2 max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function $\mathcal{L}$ is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first-layer LSTM, and the regular gradient coordinates are also used by the second-layer LSTM to implement the state update rule shown in (1). At each time step, the learner's loss and gradient is computed on a batch consisting of the entire training set $D_{train}$, because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25."}, {"section_index": "10", "section_name": "5.1 EXPERIMENT RESULTS", "section_text": "The Mini-ImageNet dataset was proposed by Vinyals et al. (2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1.

The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset $D$, we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set.
We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine-tuning and search over the learning rate and learning rate decay used during the course of these updates.

For our meta-learner, we train different models for the 1-shot and 5-shot tasks that make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.

We attain results that are much better than the baselines discussed and competitive with Matching Networks. For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our own version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end as is done in the meta-learning LSTM.

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval.

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers, each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.

We also visualize the optimization strategy learned by the meta-learner in Figure 3. We can look at the $i_t$ and $f_t$ gate values in Equation (2) at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets $D_{train}$, to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively.
We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the i_t and f_t gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets D_train, to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight-decay strategy that is consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

[Figure 3 panels: (a) forget gate values for the 1-shot meta-learner; (b) input gate values for the 1-shot meta-learner; (c) forget gate values for the 5-shot meta-learner; (d) input gate values for the 5-shot meta-learner.]

Figure 3: Visualization of the input and forget values output by the meta-learner during the course of its updates. Layers 1-4 represent the values for a randomly selected parameter from the 4 convolutional layers and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets.

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval.

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting."}, {"section_index": "11", "section_name": "6 CONCLUSION", "section_text": "We described an LSTM-based model for meta-learning, which is inspired from the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive to the state-of-the-art in metric learning for few-shot learning.

In this work, we focused our study to the few-shot and few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e. for few or lots of training examples and for few or lots of possible classes.
Our future work will thus consider moving towards this more challenging scenario.

We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016. URL http://arxiv.org/abs/1606.04474

Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26-30, 1995.

Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.

Yoshua Bengio et al. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, 27:17-36, 2012.

Luca Bertinetto, Joao F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. CoRR, abs/1606.05233, 2016. URL http://arxiv.org/abs/1606.05233

Tom Bosc. Learning to learn neural networks.

Rich Caruana. Learning many related tasks at the same time with backpropagation. Advances in neural information processing systems, pp. 657-664, 1995.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. URL http://arxiv.org/abs/1310.1531

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Lecture Notes on Comp. Sci. 2130, Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), pp. 87-94. Springer, 2001.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980

Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). 1983.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.

Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105-130, 1997.

Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181-209. Springer, 1998.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? CoRR, abs/1411.1792, 2014. URL http://arxiv.org/abs/1411.1792

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015."}]
rkEFLFqee
[{"section_index": "0", "section_name": "DECOMPOSING MOTION AND CONTENT FOE\nNATURAL VIDEO SEQUENCE PREDICTION", "section_text": "We propose a deep neural network for the prediction of future frames in natura\nvideo sequences. To effectively handle complex evolution of pixels in videos, we\npropose to decompose the motion and content, two key components generating\ndynamics in videos. Our model is built upon the Encoder-Decoder Convolutional\nNeural Network and Convolutional LSTM for pixel-level prediction, which inde\npendently capture the spatial layout of an image and the corresponding temporal\ndynamics. By independently modeling motion and content, predicting the nex\u2019\nframe reduces to converting the extracted content features into the next frame\ncontent by the identified motion features, which simplifies the task of prediction\nOur model is end-to-end trainable over multiple time steps, and naturally learns tc\ndecompose motion and content without separate training. We evaluate the propose\nnetwork architecture on human activity videos using KTH, Weizmann action, an\nUCF-101 datasets. We show state-of-the-art performance in comparison to recent\napproaches. To the best of our knowledge, this is the first end-to-end trainable net\nwork architecture with motion and content separation to model the spatio-temporal\ndynamics for pixel-level future prediction in natural videos."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Understanding videos has been one of the most important tasks in the field of computer vision.\nCompared to still images, the temporal component of videos provides much richer descriptions of\nthe visual world, such as interaction between objects, human activities, and so on. Amongst the\nvarious tasks applicable on videos, the task of anticipating the future has recently received increased\nattention in the research community. Most prior works in this direction focus on predicting high-level\nsemantics in a video such as action (2014), are\net al.|/2014}|/Walker et al.|/2016). Forecasting semantics provides information about what will happen\nin a video, and is essential to automate decision making. However, the predicted semantics are\noften specific to a particular task and provide only a partial description of the future. Also, training\nsuch models often requires heavily labeled training data which leads to tremendous annotation costs\nespecially with videos.\nIn this work, we aim to address the problem of prediction of future frames in natural video sequences.\nPixel-level predictions provide dense and direct description of the visual world, and existing video\nrecognition models can be adopted on top of the predicted frames to infer various semantics of the\nfuture. Spatio-temporal correlations in videos provide a self-supervision for frame prediction, which\nenables purely unsupervised training of a model by observing raw video frames. Unfortunately,\nestimating frames is an extremely challenging task; not only because of the inherent uncertainty of\nthe future, but also various factors of variation in videos leading to complicated dynamics in raw pixel\n\nvalues. 
There have been a number of recent attempts on frame prediction (Ranzato et al., 2014; Srivastava et al., 2015; Mathieu et al., 2015; Oh et al., 2015; Goroshin et al., 2015; Lotter et al., 2015), which use a single encoder that needs to reason about all the different variations occurring in videos in order to make predictions of the future, or require extra information like foreground-background segmentation masks and static background (Vondrick et al., 2016).

*This work was done while SH and XL were visiting the University of Michigan.

Xunyu Lin*

Honglak Lee"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a Motion-Content Network (MCnet) for robust future frame prediction. Our intuition is to split the inputs for video prediction into two easily identifiable groups, motion and content, and independently capture each information stream with separate encoder pathways. In this architecture, the motion pathway encodes the local dynamics of spatial regions, while the content pathway encodes the spatial layout of the salient parts of an image. The prediction of the future frame is then achieved by transforming the content of the last observed frame given the identified dynamics up to the last observation. Somewhat surprisingly, we show that such a network is end-to-end trainable without individual pathway supervision. Specifically, we show that an asymmetric architecture for the two pathways enables such decompositions without explicit supervision. The contributions of this paper are summarized below:

• We propose MCnet for the task of frame prediction, which separates the information streams (motion and content) into different encoder pathways.

• The proposed network is end-to-end trainable and naturally learns to decompose motion and content without separate training, and reduces the task of frame prediction to transforming the last observed frame into the next by the observed motion.

• We evaluate the proposed model on challenging real-world video datasets, and show that it outperforms previous approaches on frame prediction.

The rest of the paper is organized as follows. We briefly review related work in Section 2, and introduce an overview of the proposed algorithm in Section 3. The detailed configuration of the proposed network is described in Section 4. Section 5 describes the training and inference procedure. Section 6 illustrates implementation details and experimental results on challenging benchmarks.

The problem of visual future prediction has received growing interest in the computer vision community. It has led to various tasks depending on the objective of future prediction, such as human activity (Vondrick et al., 2015; Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and geometric path (Walker et al., 2014). Although previous works achieved reasonable success in specific tasks, they are often limited to estimating predefined semantics, and require fully-labeled training data. To alleviate this issue, approaches predicting representations of the future beyond semantic labels have been proposed. Walker et al. (2014) proposed a data-driven approach to predict the motion of a moving object, and coarse hallucination of the predicted motion. Vondrick et al. (2015) proposed a deep regression network to predict feature representations of the future frames. These approaches are supervised and provide coarse predictions of how the future will look. Our work also focuses on unsupervised learning for prediction of the future, but on a more direct visual prediction task: frame prediction.

Compared to predicting semantics, pixel-level prediction has been less investigated due to the difficulties in modeling the evolution of raw pixels over time. Fortunately, recent advances in deep learning provide a powerful tool for sequence modeling, and enable the creation of novel architectures for modeling complex sequential data. Ranzato et al. (2014) applied a recurrent neural network developed for language modeling to frame prediction by posing the task as classification of each image region to one of quantized patch dictionaries. Srivastava et al. (2015) applied a sequence-to-sequence model to video prediction, and showed that Long Short-Term Memory (LSTM) is able to capture pixel dynamics. Oh et al. (2015) proposed an action-conditional encoder-decoder network to predict future frames in Atari games. In addition to the different choices of architecture, some other works addressed the importance of selecting the right objective function: Lotter et al. (2015) used adversarial loss with combined CNN and LSTM architectures, and Mathieu et al. (2015) employed a similar adversarial loss with additional regularization using a multi-scale encoder-decoder network. Finn et al. (2016) constructed a network that predicts transformations on the input pixels for next frame prediction. Patraucean et al. (2015) proposed a network that, by explicitly predicting optical flow features, is able to predict the next frame in a video. Vondrick et al. (2016) proposed a generative adversarial network for video which, by generating a background-foreground mask, is able to generate realistic-looking video sequences. However, none of the previously mentioned approaches exploit spatial and temporal information separately in an unsupervised fashion. In terms of the way data is observed, the closest work to ours is Xue et al. (2016). The differences are (1) our model is deterministic and theirs is probabilistic, (2) our motion encoder is based on convolutional LSTM (Shi et al., 2015), which is a more natural module to model long-term dynamics, (3) our content encoder observes a single-scale input and theirs observes many scales, and (4) we directly generate image pixel values, which is a more complicated task. We aim to exploit the existing spatio-temporal correlations in videos by decomposing the motion and content in our network architecture."}, {"section_index": "3", "section_name": "3 ALGORITHM OVERVIEW", "section_text": "In this section, we formally define the task of frame prediction and the role of each component in the proposed architecture. Let x_t ∈ R^{w×h×c} denote the t-th frame in an input video x, where w, h, and c denote width, height, and number of channels, respectively. The objective of frame prediction is to generate the future frame x̂_{t+1} given the input frames x_{1:t}.

At the t-th time step, our network observes a history of previous consecutive frames up to frame t, and generates the prediction of the next frame x̂_{t+1} as follows:
• Motion Encoder recurrently takes an image difference input between frame x_t and x_{t−1}, starting from t = 2, and produces the hidden representation d_t encoding the temporal dynamics of the scene components (Section 4.1).

• Content Encoder takes the last observed frame x_t as an input, and outputs the hidden representation s_t that encodes the spatial layout of the scene (Section 4.2).

• Multi-Scale Motion-Content Residual takes the computed features, from both the motion and content encoders, at every scale right before pooling and computes residuals r_t (He et al., 2015) to compensate for the information loss caused by pooling in the encoding phase (Section 4.3).

• Combination Layers and Decoder takes the outputs from both encoder pathways and residual connections, d_t, s_t, and r_t, and combines them to produce a pixel-level prediction of the next frame x̂_{t+1} (Section 4.4).

The overall architecture of the proposed algorithm is described in Figure 1. The prediction of multiple frames, x̂_{t+1:t+T}, can be achieved by recursively performing the above procedures over T time steps (Section 5). Each component in the proposed architecture is described in the following section.

Figure 1: Overall architecture of the proposed network. (a) illustrates MCnet without the Motion-Content Residual skip connections, and (b) illustrates MCnet with such connections. Our network observes a history of image differences through the motion encoder and the last observed image through the content encoder. Subsequently, our network proceeds to compute motion-content features and communicates them to the decoder for the prediction of the next frame."}, {"section_index": "4", "section_name": "4 ARCHITECTURE", "section_text": "This section describes the detailed configuration of the proposed architecture, including the two encoder pathways, multi-scale residual connections, combination layers, and decoder."}, {"section_index": "5", "section_name": "4.1 MOTION ENCODER", "section_text": "To the best of our knowledge, the idea of separating motion and content has not been investigated in the task of unsupervised deterministic frame prediction. The proposed architecture shares similarities with the two-stream CNN (Simonyan and Zisserman, 2014), which is designed for action recognition to jointly exploit the information from frames and their temporal dynamics. However, in contrast to their network, we aim to learn features for temporal dynamics directly from the raw pixels, and we use the identified features from the motion in combination with spatial features to make pixel-level predictions of the future.

The motion encoder captures the temporal dynamics of the scene's components by recurrently observing subsequent difference images computed from x_{t−1} and x_t, and outputs motion features by

$[d_t, c_t] = f^{dyn}(x_t - x_{t-1}, d_{t-1}, c_{t-1}),$

where d_t denotes the motion representation and c_t the memory cell state of the recurrent unit. By observing image differences only, the encoder is encouraged to capture local dynamics rather than complicated global motion. For this, we use an encoder CNN with a Convolutional LSTM (Shi et al., 2015) layer on top."}, {"section_index": "6", "section_name": "4.2 CONTENT ENCODER", "section_text": "The content encoder extracts important spatial features from a single frame, such as the spatial layout of the scene and salient objects in a video. Specifically, it takes the last observed frame x_t as an input, and produces content features by

$s_t = f^{cont}(x_t).$

It is important to note that our model employs an asymmetric architecture for the motion and content encoder. The content encoder takes the last observed frame, which keeps the most critical clue to reconstruct the spatial layout of the near future, but has no information about dynamics. On the other hand, the motion encoder takes a history of previous image differences, which are less informative about the future spatial layout compared to the last observed frame, yet contain important spatio-temporal variations occurring over time. This asymmetric architecture encourages the encoders to exploit each of the two pieces of critical information to predict the future content and motion individually, and enables the model to learn motion and content decomposition naturally without any supervision.

To prevent information loss after the pooling operations in our motion and content encoders, we use residual connections (He et al., 2015). The residual connections in our network communicate motion-content features at every scale into the decoder layers after unpooling operations. The residual feature at layer l is computed by

$r_t^l = f^{res}([s_t^l, d_t^l]),$

where [s_t^l, d_t^l] denotes the concatenation of the content and motion features at layer l."}, {"section_index": "7", "section_name": "4.4 COMBINATION LAYERS AND DECODER", "section_text": "The outputs from the two encoder pathways, d_t and s_t, encode a high-level representation of motion and content, respectively. Given these representations, the objective of the decoder is to generate a pixel-level prediction of the next frame x̂_{t+1} ∈ R^{w×h×c}. To this end, it first combines the motion and content back into a unified representation by

$f_t = g^{comb}([d_t, s_t]),$

where [d_t, s_t] ∈ R^{w'×h'×2c'} denotes the concatenation of the higher-level motion and content features in the depth dimension, and f_t ∈ R^{w'×h'×c'} denotes the combined high-level representation of motion and content. g^{comb} is implemented by a CNN with bottleneck layers (Hinton and Salakhutdinov, 2006); it first projects both d_t and s_t into a lower-dimensional embedding space, and then puts it back to the original size to construct the combined feature f_t. Intuitively, f_t can be viewed as the content feature of the next time step, s_{t+1}, which is generated by transforming s_t using the observed dynamics encoded in d_t. Then our decoder places f_t back into the original pixel space by

$\hat{x}_{t+1} = g^{dec}(f_t, r_t),$

where r_t is a list containing the residual connections from every layer of the motion and content encoders before pooling, sent to every layer of the decoder after unpooling. We employ the deconvolution network (Zeiler et al., 2011) for our decoder network g^{dec}, which is composed of multiple successive operations of deconvolution, rectification and unpooling, with the addition of the motion-content residual connections after each unpooling operation. The output layer is passed through a tanh(·) activation function. Unpooling with fixed switches is used to upsample the intermediate activation maps.
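To make the data flow of Section 4 concrete, the following is a minimal sketch of one forward step. The module constructors passed in (motion_enc, content_enc, comb, dec) are stand-ins for the encoder CNN with ConvLSTM, the content CNN, the bottleneck combination layers, and the deconvolutional decoder described above; residual connections are omitted for brevity, so this is not a faithful implementation.

import torch
import torch.nn as nn

class MCnetSketch(nn.Module):
    """Sketch of the motion/content decomposition forward pass."""
    def __init__(self, motion_enc, content_enc, comb, dec):
        super().__init__()
        self.motion_enc, self.content_enc = motion_enc, content_enc
        self.comb, self.dec = comb, dec

    def forward(self, frames):
        # frames: (batch, T, c, h, w) history of observed frames
        d, c = None, None                              # initial recurrent state
        for t in range(1, frames.size(1)):
            diff = frames[:, t] - frames[:, t - 1]     # image-difference input
            d, c = self.motion_enc(diff, d, c)         # recurrent motion features d_t
        s = self.content_enc(frames[:, -1])            # content features s_t of last frame
        f = self.comb(torch.cat([d, s], dim=1))        # f_t = g_comb([d_t, s_t])
        return torch.tanh(self.dec(f))                 # x-hat_{t+1} = g_dec(f_t, ...)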
"}, {"section_index": "8", "section_name": "5.1 MULTI-STEP PREDICTION", "section_text": "Section 4 describes the procedures for single frame prediction, while this section presents the extension of our algorithm to the prediction of multiple time steps.

Given an input video, our network observes the first n frames as image differences between frame x_t and x_{t−1}, starting from t = 2 up to t = n, to encode initial temporal dynamics through the motion encoder. The last frame x_n is given to the content encoder to be transformed into the first prediction x̂_{n+1} by the identified motion features.

For each time step t ∈ [n + 1, n + T], where T is the desired number of prediction steps, our network takes the difference image between the first prediction x̂_{n+1} and the previous image x_n, and the first prediction x̂_{n+1} itself, to predict the next frame x̂_{n+2}, and so forth.

Our network is trained to minimize the loss

$\mathcal{L} = \alpha \mathcal{L}_{img} + \beta \mathcal{L}_{GAN},$

where

$\mathcal{L}_{img} = \mathcal{L}_p(\hat{x}_{t+k}, x_{t+k}) + \mathcal{L}_{gdl}(\hat{x}_{t+k}, x_{t+k}),$

with

$\mathcal{L}_p(y, z) = \sum_{k=1}^{T} \lVert y_k - z_k \rVert_p,$

$\mathcal{L}_{gdl}(y, z) = \sum_{i,j}^{h,w} \big|\, |y_{i,j} - y_{i-1,j}| - |z_{i,j} - z_{i-1,j}| \,\big|^{\lambda} + \big|\, |y_{i,j-1} - y_{i,j}| - |z_{i,j-1} - z_{i,j}| \,\big|^{\lambda}.$

Here, x_{t+k} and x̂_{t+k} are the target and predicted frames, respectively, and p and λ are hyper-parameters for L_p and L_gdl, respectively. Intuitively, L_p guides our network to match the average pixel values directly, while L_gdl guides our network to match the gradients of such pixel values. Overall, L_img guides our network to learn parameters towards generating the correct average sequence given the input. Training to generate average sequences, however, results in somewhat blurry generations, which is the reason we use an additional sub-loss. L_GAN is the generator loss in adversarial training, which allows our model to predict realistic-looking frames; it is defined by

$\mathcal{L}_{GAN} = -\log D([x_{1:t}, G(x_{1:t})]),$

where x_{1:t} is the concatenation of the input images, x_{t+1:t+T} is the concatenation of the ground-truth future images, G(x_{1:t}) = x̂_{t+1:t+T} is the concatenation of all predicted images along the depth dimension, and D(·) is the discriminator in adversarial training. The discriminative loss in adversarial training is defined by

$\mathcal{L}_{disc} = -\log D([x_{1:t}, x_{t+1:t+T}]) - \log(1 - D([x_{1:t}, G(x_{1:t})])).$

L_GAN, in addition to L_img, allows our network to not only generate the target sequence, but also simultaneously enforce realism in the images through visual sharpness that fools the human eye. Note that our model uses its predictions as input for the next time step during training, which enables the gradients to flow through time and makes the network robust to error propagation during prediction. For a more detailed description of adversarial training, please refer to Appendix D.
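Below is a sketch of the image loss L_img = L_p + L_gdl defined above, for a single predicted frame. The function name and signature are ours; pred and target are (batch, c, h, w) tensors, and the defaults correspond to the hyper-parameters the paper reports using (p = 2, λ = 1). Note the L_p term here is the p-th power of the norm, which has the same minimizer.

import torch

def image_loss(pred, target, p=2, lam=1):
    """Sketch of L_img = L_p + L_gdl for one frame."""
    # L_p: match average pixel values directly
    l_p = (pred - target).abs().pow(p).sum()

    # L_gdl: match image gradients along both spatial axes
    def grad_diff(y, z, dim):
        n = y.size(dim) - 1
        gy = (y.narrow(dim, 1, n) - y.narrow(dim, 0, n)).abs()
        gz = (z.narrow(dim, 1, n) - z.narrow(dim, 0, n)).abs()
        return (gy - gz).abs().pow(lam).sum()

    l_gdl = grad_diff(target, pred, dim=2) + grad_diff(target, pred, dim=3)
    return l_p + l_gdl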
"}, {"section_index": "9", "section_name": "6 EXPERIMENTS", "section_text": "In this section, we present experiments using our network for video generation. We first evaluate our network, MCnet, on the KTH (Schuldt et al., 2004) and Weizmann action (Gorelick et al., 2007) datasets, and compare against a baseline convolutional LSTM (ConvLSTM) (Shi et al., 2015). We then proceed to evaluate on the more challenging UCF-101 (Soomro et al., 2012) dataset, in which we compare against the same ConvLSTM baseline and also the current state-of-the-art method by Mathieu et al. (2015). For all our experiments, we use α = 1, λ = 1, and p = 2 in the loss functions.

Architectures. The content encoder of MCnet is built with the same architecture as VGG16 (Simonyan and Zisserman, 2015) up to the third pooling layer. The motion encoder of MCnet is also similar to VGG16 up to the third pooling layer, except that we replace its consecutive 3x3 convolutions with single 5x5, 5x5, and 7x7 convolutions in each layer. The combination layers are composed of 3 consecutive 3x3 convolutions (256, 128, and 256 channels in each layer). The multi-scale residuals are composed of 2 consecutive 3x3 convolutions. The decoder is the mirrored architecture of the content encoder where we perform unpooling followed by deconvolution. For the baseline ConvLSTM, we use the same architecture as the motion encoder, residual connections, and decoder, except we increase the number of channels in the encoder in order to have an overall comparable number of parameters with MCnet."}, {"section_index": "10", "section_name": "6.1 KTH AND WEIZMANN ACTION DATASETS", "section_text": "Experimental settings. The KTH human action dataset (Schuldt et al., 2004) contains 6 categories of periodic motions on a simple background: running, jogging, walking, boxing, hand-clapping and hand-waving. We use persons 1-16 for training and 17-25 for testing, and also resize frames to 128x128 pixels. We train our network and baseline by observing 10 frames and predicting 10 frames into the future on the KTH dataset. We set β = 0.02 for training. We also select the walking, running, one-hand waving, and two-hands waving sequences from the Weizmann action dataset (Gorelick et al., 2007) for testing the networks' generalizability.

For all the experiments, we test the networks on predicting 20 time steps into the future. As for evaluation, we use the same SSIM and PSNR metrics as in Mathieu et al. (2015). The evaluation on KTH was performed on sub-clips within each video in the test set. We sample sub-clips every 3 frames for running and jogging, and sample sub-clips every 20 frames (skipping the frames we have already predicted) for walking, boxing, hand-clapping, and hand-waving. Sub-clips for running, jogging, and walking were manually trimmed to ensure humans are always present in the frames. The evaluation on Weizmann was performed on all sub-clips in the selected sequences.

In addition to the results in this section, we also provide more qualitative comparisons in the supplementary material and in the videos on the project website: https://sites.google

Figure 2: Quantitative comparison between MCnet and ConvLSTM baseline with and without multi-scale residual connections (indicated by "+ RES"). Given 10 input frames, the models predict 20 frames recursively, one by one. Left column: evaluation on KTH dataset (Schuldt et al., 2004). Right column: evaluation on Weizmann (Gorelick et al., 2007) dataset.

Results. Figure 2 summarizes the quantitative comparisons among our MCnet, the ConvLSTM baseline, and their residual variations. In the KTH test set, our network outperforms the ConvLSTM baseline by a small margin. However, when we test the residual versions of MCnet and ConvLSTM on the Weizmann dataset, which contains similar motions, we can see that our network can generalize well to the unseen contents by showing clear improvements, especially in long-term prediction. One reason for this result is that the test and training partitions of the KTH dataset have simple and similar image contents, so that ConvLSTM can memorize the average background and human appearance to make reasonable predictions. However, when tested on unseen data, ConvLSTM has to internally take care of both scene dynamics and image contents in a mingled representation, which gives it a hard time for generalization. In contrast, the reason our network outperforms the ConvLSTM baseline on unseen data is that our network focuses on identifying general motion features and applying them to a learned content representation.
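Since the comparisons above and below rely on PSNR and SSIM, here is a minimal sketch of the standard PSNR definition (the function and its defaults are ours, not the paper's code; SSIM can be computed analogously, e.g. with skimage.metrics.structural_similarity).

import numpy as np

def psnr(target, pred, max_val=1.0):
    """Peak Signal-to-Noise Ratio between target and predicted frames in [0, max_val]."""
    mse = np.mean((target.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)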
Figure 3 presents qualitative results of multi-step prediction by our network and ConvLSTM. As expected, prediction results by our full architecture preserve human shapes more accurately than the baseline. It is worth noticing that our network produces very sharp predictions over long-term time steps; it shows that MCnet is able to capture periodic motion cycles, which reduces the uncertainty of future prediction significantly. More qualitative comparisons are shown in the supplementary material and the project website.

Figure 3: Qualitative comparison between our MCnet model and ConvLSTM. We display predictions starting from the 12th frame, in every 3 timesteps. The first 3 rows correspond to the KTH dataset for the action of jogging and the last 3 rows correspond to the Weizmann dataset for the action of walking.

Experimental settings. This section presents results on the challenging real-world videos in the UCF-101 (Soomro et al., 2012) dataset. Collected from YouTube, the dataset contains 101 realistic human actions taken in the wild and exhibits various challenges, such as background clutter, occlusion, and complicated motion. We employed the same network architecture as in the KTH dataset, but resized frames to 240x320 pixels, and trained the network to observe 4 frames and predict a single frame. We set β = 0.001 for training. We also trained our convolutional LSTM baseline in the same way. Following the same protocol as Mathieu et al. (2015) for data pre-processing and evaluation metrics on full images, all networks were trained on the Sports-1M (Karpathy et al., 2014) dataset and tested on UCF-101 unless otherwise stated.¹

Results. Figure 4 shows the quantitative comparisons between our network trained for single-step prediction and Mathieu et al. (2015). We can clearly see the advantage of our network over the baseline. The separation of motion and contents in two encoder pathways allows our network to identify key motion and content features, which are then fed into the decoder to yield predictions of higher quality compared to the baseline.² In other words, our network only moves what shows motion in the past, and leaves the rest untouched. We also trained a residual version of MCnet on UCF-101, indicated by "MCnet + RES UCF101", to compare how well our model generalizes when trained and tested on the same or different dataset(s). To our surprise, when tested with UCF-101, the MCnet trained on Sports-1M (MCnet + RES) roughly matches the performance of the MCnet trained on UCF-101 (MCnet + RES UCF101), which suggests that our model learns effective representations which can generalize to new datasets. Figure 5 presents qualitative comparisons between frames generated by our network and Mathieu et al. (2015). Since the ConvLSTM and Mathieu et al. (2015) lack explicit motion and content modules, they lose the sense of the dynamics in the video and therefore the contents become distorted quickly. More qualitative comparisons are shown in the supplementary material and the project website.

¹We use the code and model released by Mathieu et al. (2015) at https://github.com/coupriec/VideoPredictionICLR2016

²We were not able to get the model fine-tuned on UCF-101 from the authors, so it is not included in Figure 4.
Figure 4: Quantitative comparison between our model, convolutional LSTM (Shi et al., 2015), and Mathieu et al. (2015). Given 4 input frames, the models predict 8 frames recursively, one by one."}, {"section_index": "11", "section_name": "7 CONCLUSION", "section_text": "We proposed a motion-content network for pixel-level prediction of future frames in natural video sequences. The proposed model employs two separate encoding pathways, and learns to decompose motion and content without explicit constraints or separate training. Experimental results suggest that separate modeling of motion and content improves the quality of the pixel-level future prediction, and our model overall achieves state-of-the-art performance in predicting future frames in challenging real-world video datasets.

This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, gifts from the Bosch Research and Technology Center, and a Sloan Research Fellowship. We also thank NVIDIA for donating K40c and TITAN X GPUs. We thank Ye Liu, Junhyuk Oh, Xinchen Yan, Lajanugen Logeswaran, Yuting Zhang, Sungryull Sohn, Kibok Lee, Rui Zhang, and other collaborators for helpful discussions. R. Villegas was partly supported by the Rackham Merit Fellowship.

C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.

L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247-2253, December 2007.

R. Goroshin, M. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. In NIPS, 2015.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.

M. Hoai and F. Torre. Max-margin early event detectors. IJCV, 2013.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.

T. Lan, T. Chen, and S. Savarese. A hierarchical representation for future action prediction. In ECCV, 2014.

W. Lotter, G. Kreiman, and D. Cox. Unsupervised learning of visual structure using predictive generative networks. arXiv preprint arXiv:1504.08022, 2015.

Figure 5: Qualitative comparisons among MCnet, ConvLSTM and Mathieu et al. (2015). We display predicted frames (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth.
More clear motion prediction can be seen in the project website.

M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.

J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in atari games. In NIPS, 2015.

V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015.

L. C. Pickup, Z. Pan, D. Wei, Y. Shih, C. Zhang, A. Zisserman, B. Scholkopf, and W. T. Freeman. Seeing the arrow of time. In CVPR, 2014.

S. L. Pintea, J. C. van Gemert, and A. W. M. Smeulders. Dejavu: Motion prediction in static images. In European Conference on Computer Vision, 2014.

M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra.
Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.

M. S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In ICCV, 2011.

C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, 2004.

X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems 28, 2015.

K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.

N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.

C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. arXiv preprint arXiv:1504.08023, 2015.

C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In NIPS, 2016.

J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014.

J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. CoRR, abs/1606.07873, 2016.

P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In ICCV, 2013.

T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. NIPS, 2016.

J. Yuen and A. Torralba. A data-driven approach for event prediction. In ECCV, 2010.

M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, 2011.

Figure 6: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, for every 3 timesteps. More clear motion prediction can be seen in the project website."}, {"section_index": "12", "section_name": "Walking", "section_text": "Figure 7: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, for every 3 timesteps. More clear motion prediction can be seen in the project website.

Figure 8: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen in the project website."}, {"section_index": "13", "section_name": "A QUALITATIVE AND QUANTITATIVE COMPARISON WITH CONSIDERABLE CAMERA MOTION AND ANALYSIS", "section_text": "In this section, we show frame prediction examples in which considerable camera motion occurs. We analyze the effects of camera motion on our best network and the corresponding baselines. First, we analyze qualitative examples on UCF101 (more complicated camera motion) and then on KTH (zoom-in and zoom-out camera effect).

UCF101 Results. As seen in Figures 9 and 10, our model handles foreground and camera motion for a few steps. We hypothesize that for the first few steps, motion signals from images are clear.
However, as images are predicted, motion signals start to deteriorate due to prediction errors. When a considerable amount of camera motion is present in image sequences, the motion signals are very dense. As predictions evolve into the future, our motion encoder has to handle large motion deterioration due to prediction errors, which causes motion signals to get easily confused and lost quickly.

Figure 9: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen in the project website.

Figure 10: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen in the project website.

KTH Results. We were unable to find videos with background motion in the KTH dataset, but we found videos where the camera is zooming in or out for the actions of boxing, handclapping, and handwaving. In Figure 11, we display qualitative results for such videos. Our model is able to predict the zoom change in the cameras, while continuing the action motion. In comparison to the performance observed in UCF101, the background does not change much. Thus, the motion signals are well localized in the foreground motion (human), and do not get confused with the background and lost as quickly.

Figure 11: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, in every 3 timesteps. More clear motion prediction can be seen in the project website."}, {"section_index": "14", "section_name": "Boxing", "section_text": "In this section, we show an additional quantitative comparison with a baseline based on copying the last observed frame through time for the KTH and UCF101 datasets. Copying the last observed frame through time ensures perfect background prediction in videos where most of the motion comes from the foreground (i.e. a person performing an action). However, if such foreground composes a small part of the video, it will result in a high prediction quality score regardless of the simple copying action.

In Figure 12 below, we can see the quantitative comparison on the two datasets. Copying the last observed frame through time does a reasonable job in both datasets; however, the impact is larger in UCF101. Videos in the KTH dataset comprise simple backgrounds with minimal camera motion, which allows our network to easily predict both foreground and background motion, resulting in better image quality scores. However, videos in UCF101 contain more complicated and diverse backgrounds, which in combination with camera motion present a much greater challenge to video prediction networks. From the qualitative results above, we can see that our network performs better in videos that contain isolated areas of motion compared to videos with dense motion. A simple copy/paste operation of the last observed frame ensures very high prediction scores in videos where very small motion occurs.
The considerable score boost from videos with small motion causes the simple copy/paste baseline to outperform MCnet in the overall performance on UCF101.

Figure 12: Extended quantitative comparison including a baseline based on copying the last observed frame through time."}, {"section_index": "15", "section_name": "UCF101 MOTION DISAMBIGUATION EXPERIMENTS", "section_text": "Due to the observed bias from videos with small motion, we perform experiments by measuring the image quality scores on areas of motion. These experiments are similar to the ones performed in Mathieu et al. (2015). We compute DeepFlow optical flow (Weinzaepfel et al., 2013) between the previous and the current groundtruth image of interest, compute the magnitude, and normalize it to [0, 1]. The computed optical flow magnitude is used to mask the pixels where motion was observed. We set the pixels where the optical flow magnitude is less than 0.2 to zero, and leave all other pixels untouched in both the groundtruth and predicted images. Additionally, we separate the test videos by the average ℓ2-norm of time difference between target frames. We separate the test videos into deciles based on the computed average ℓ2-norms, and compute image quality on each decile. Intuitively, the 1st decile contains videos with the least overall motion (i.e. frames that show the smallest change over time), and the 10th decile contains videos with the most overall motion (i.e. frames that show the largest change over time).

As shown in Figure 13, when we only evaluate on pixels where rough motion is observed, MCnet reflects higher PSNR and SSIM, and clearly outperforms all the baselines in terms of SSIM. The SSIM results show that our network is able to predict a structure (i.e. textures, edges, etc.) similar to the groundtruth images within the areas of motion. The PSNR results, however, show that our method outperforms the simple copy/paste baseline for the first few steps, but then our method performs slightly worse. The discrepancies observed between PSNR and SSIM scores could be due to the fact that some of the predicted images may not reflect the exact pixel values of the groundtruth regardless of the structures being similar. SSIM scores are known to take into consideration features in the image that go beyond directly matching pixel values, reflecting more accurately how humans perceive image quality.
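The masking step just described can be sketched as follows. Computing DeepFlow itself is out of scope here; `flow` is assumed to be an (h, w, 2) flow field computed between the previous and current groundtruth frames, and the function name is ours.

import numpy as np

def mask_by_motion(flow, gt, pred, threshold=0.2):
    """Zero out low-motion pixels in both groundtruth and prediction (a sketch)."""
    mag = np.linalg.norm(flow, axis=-1)            # per-pixel flow magnitude
    mag = mag / (mag.max() + 1e-8)                 # normalize to [0, 1]
    mask = (mag >= threshold).astype(gt.dtype)     # keep only moving pixels
    return gt * mask[..., None], pred * mask[..., None]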
SSIM scores are known to take into consideration features in the image\nthat go beyond directly matching pixel values, reflecting more accurately how humans perceived\nimage quality.\nPeak Signal to Noise Ratio\n\n35\n\n30\n\n25\n\ni Conv STM\nIKE Conv LSTM + RES.\n@ MCnet\n\nle-@ Mcnet + RES\nl= Matheiu et al\nJali Copy last frame\n\nStructural Similarity\n\n1.0\n\nConv LSTM\n\nlea Conv LSTM + RES\nl@@ MCnet\n\nleme Mcnet + RES\nI@@ Matheiu et al\n\n\\aiith, Copy last frame\n\ntime steps\n\ntime steps\nFigure 13: Extended quantitative comparison on UCF101 including a baseline based on copying the\nlast observed frame through time using motion based pixel mask.\nFigures|15]and how the evaluation by separating the test videos into deciles based on the average\n\u00e92-norm of time difference between target frames. From this evaluation, it is proven that the copy\nlast frame baseline scores higher in videos where motion is the smallest. The first few deciles\n(videos with small motion) show that our network is not just copying the last observed frame through\ntime, otherwise it would perform similarly to the copy last frame baseline. The last deciles (videos\nwith large motion) show our network outperforming all the baselines, including the copy last frame\nbaseline, effectively confirming that our network does predict motion similar to the motion observed\nin the video.\nPeak Signal to Noise Ratio Peak Signal to Noise Ratio Peak Signal to Noise Ratio Peak Signal to Noise Ratio\n\nPeak Signal to Noise Ratio\n\n10\u00b0\u201d decile\n\n28 10 =\n[mm Convistm TE Conv ist\u2122\nImm Conv LSTM + RES lem Conv LSTM + RES\n26 le-@ mcnet > @ Mcnet\nlexe Mcnet + RES Zo. lee Mcnet + RES\nos \\@-@ Matheiu et a & l=@ Matheiu et al\nia Copy last frame = lai, Copy last frame\n2 a\nTos\n20 2\no\n2\n8 a Bor\n16\n1 2 3 7 3\ntime steps time steps\nth deci\n9\" decile\nio .\n30 [mm Convistm I Conv ist\u2122\nIBM Conv LSTM + RES lem Conv LSTM + RES\nve lore mcnet > @ Mcnet\nle-\u00ae Mcnet + RES \u00a3 lee Mcnet + RES\n_ J@@ Matheiu et al Boo J@@ Matheiu et al\nlati. 
Copy last frame = lai Copy last frame\n24 a\ng\n2 5 os\no\n20 3\npty\n0.7\n1 2 3 7 3 1 2 3 4 5 7 3\ntime steps time steps\nth 7\n8\" decile\nio .\nConv ist [mm Conv stm\nso IM Conv LSTM + RES [eM Conv LSTM + RES\nlore mcnet > @ Mcnet\nle-@ Mcnet + RES \u00a3 lee Mcnet + RES\nl=@ Matheiu et al 6 1@@ Matheiu et al\nlatin, Copy last frame -\u00b0 aia, Copy last frame\n25 a\n3\n5\nTos\n20 2\n1 2 5 7 3 1 2 3 4 5 7 3\ntime steps time steps\nth 7\n7\" decile\nio\nso Conv ist Conv LsT\u2122\nConv LSTM + RES Ie Conv LSTM + RES\nsa Mcnet > @ Mcnet\nMcnet + RES \u00a3 leme Mcnet + RES\n2 J=@ Matheiu et al 5 I@@ Matheiu et al\nlatin Copy last frame = os lati Copy last frame\n30 a\n28 e\n2\n26 ref\n3\n24 5 08\noa | \u201d\n2\n1 2 5 7 3 1 2 3 4 5 7 3\ntime steps time steps\nath 7\n6\" decile\nio\n35 [Nm Convistm Conv LsT\u2122\nImm Conv LSTM + RES IM Conv LSTM + RES\nlore Mcnet > lore mcnet\nle=e Mcnet + RES 4 le-\u00ae Mcnet + RES\n\\@-@ Matheiu et a 5 l@@ Matheiu et al\n30 ia Copy last frame Fz lai, Copy last frame\na\nB08\n25 2\nu\na\n20\n1 2 5 7 3 1 2 3 4 5 7 3\n\ntime steps\n\ntime steps\nFigure 14: Quantitative comparison on UCF101 using motion based pixel mask, and separatin;\ndataset by average /5-norm of time difference between target frames.\nPeak Signal to Noise Ratio\n\n40\n\n35\n\n30\n\n25\n\nPeak Signal to Noise Ratio\n\n45\n\n40\n\n35\n\n30\n\nPeak Signal to Noise Ratio\n\n25\n\n45\n\n40\n\n35\n\n30\n\nPeak Signal to Noise Ratio\n\n25\n\n50\n\n45\n\nPeak Signal to Noise Ratio\n\n5\u00a2 decile\n\n1.0\n\n[Bam Convist TB Conv sti\nIEAM Conv iSTM + RES Conv LSTM + RES\nlore mcnet - Mcnet\nlexe Mcnet + RES \u00a3 Mcnet + RES\nJ@=@ Matheiu et al o Matheiu et al\nLadi Copy last frame = Copy last frame\n= PY\na\nFe\n2\nG09\n2\na\n3 7 3 1 2 3 4 5 6 7 3\ntime steps time steps\nth deci\n4\" decile\nio\n[mm Convistm\nIM Conv LSTM + RES\nlove Mcnet -\nleme Mcnet + RES 2\n}@@ Matheiu et al o\natti Copy last frame =\nE\na\ng\n2 |f\nZ 0.9} mea Conv LSTM + RES\n5 |leve mcnet\nGH |lee mcnet + res\n\\@=@ Matheiu et al\nalli Copy last frame\n3 7 3 1 2 3 4 5 6 7 3\ntime steps time steps\nrd deci\n3\u201d decile\nio\n[mm Convistm\nIKE Conv LSTM + RES\nJere Mcnet -\nle-@ Mcnet + RES 2\n\\@=@ Matheiu et al 5\nlatin Copy last frame =\nig oe\n3 mmcowism _\nJ |e conv isto + Res\n5 lee mcnet\nGH __|leme mcnet + res\n0.9 Hom Matheiu et al\n\nath, Copy last frame\n\ntime steps\n\n24 decile\n\n[iH Conv ist\n|e Conv LSTM + RES\nlore mcnet\n\n}@=@ MCnet + RES\nl@=@ Matheiu et a\n\n|aiiia. Copy last frame\n\nStructural Similarity\n\n1.0\n\n2\n\n3\n\n4 5 6 7 3\ntime steps\n\nJe=@ Mcnet + RES\n\n= Matheiu et al\n\naii, Copy last frame\n\ntime steps\n\nConv isTM\n|e Conv LSTM + RES\nlore mcnet\n\n}@=@ MCnet + RES\n@=@ Matheiu et a\n\n|aiiia Copy last frame\n\nStructural Similarity\n\n2\n\n3\n\n4 5 6 7 3\ntime steps\n\nNH Conv isT\u2122\nEAE Conv LSTM + RES\n@:@ Mcnet\n\nJ@=@ Mcnet + RES\n\n\\- Matheiu et al\n\nalah Copy last frame\n\ntime steps\n\n2\n\n3\n\n4 5 6 7 3\ntime steps\nFigure 15: Quantitative comparison on UCF101 using motion based pixel mask, and separatin;\ndataset by average /5-norm of time difference between target frames."}, {"section_index": "16", "section_name": "D ADVERSARIAL TRAINING", "section_text": "(2015) proposed an adversarial training for frame prediction. Inspired by[Goodfellow|\n(2014), they proposed a training procedure that involves a generative model G and a discrimi-\nnative model D. The two models compete in a two-player minimax game. 
"}, {"section_index": "16", "section_name": "D ADVERSARIAL TRAINING", "section_text": "Mathieu et al. (2015) proposed an adversarial training for frame prediction. Inspired by Goodfellow et al. (2014), they proposed a training procedure that involves a generative model G and a discriminative model D. The two models compete in a two-player minimax game. The discriminator D is optimized to correctly classify its inputs as either coming from the training data (real frame sequences) or from the generator G (synthetic frame sequences). The generator G is optimized to generate frames that fool the discriminator into believing that they come from the training data. At training time, D takes the concatenation of the input frames that go into G and the images produced by G. The adversarial training objective is defined as follows:

min_G max_D log D([x_{1:t}, x_{t+1:t+T}]) + log(1 − D([x_{1:t}, G(x_{1:t})])),

where [·,·] denotes concatenation in the depth dimension, x_{1:t} denotes the input frames to G, x_{t+1:t+T} are the target frames, and G(x_{1:t}) = x̂_{t+1:t+T} are the frames predicted by G. In practice, we split the minimax objective into two separate, but equivalent, objectives: L_GAN and L_disc. During optimization, we minimize the adversarial objective alternating between L_GAN and L_disc. L_GAN is defined by

L_GAN = − log D([x_{1:t}, G(x_{1:t})]),

where we optimize the parameters of G to minimize L_GAN while the parameters of D stay untouched. As a result, G is optimized to generate images that make D believe that they come from the training data. Thus, the generated images look sharper and more realistic. L_disc is defined by

L_disc = − log D([x_{1:t}, x_{t+1:t+T}]) − log(1 − D([x_{1:t}, G(x_{1:t})])),

where we optimize the parameters of D to minimize L_disc while the parameters of G stay untouched. D learns to tell whether its input came from the training data or the generator G. Alternating between the two objectives causes G to generate very realistic images, and D to be unable to distinguish between generated frames and frames from the training data."}]
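The alternating optimization of the two objectives can be sketched as follows; this is a schematic restatement of the losses above (with an added epsilon for numerical stability as an assumption), not the authors' implementation, and D here stands for any discriminator returning a probability.

```python
import numpy as np

def gan_objectives(d_real, d_fake, eps=1e-8):
    # d_real = D([x_{1:t}, x_{t+1:t+T}]): discriminator output on a real sequence.
    # d_fake = D([x_{1:t}, G(x_{1:t})]): discriminator output on a generated one.
    l_gan = -np.log(d_fake + eps)                                # update G only
    l_disc = -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)  # update D only
    return l_gan, l_disc

# Training alternates: one step minimizing l_disc w.r.t. D's parameters,
# then one step minimizing l_gan w.r.t. G's parameters, and so on.
```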
BkIqod5ll
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Convolutional Neural Networks (CNNs) (LeCun et al., 1998) are variants of multi-layer perceptrons that have been inspired by biological cells in the visual cortex. The cells act as local filters over the input space and are well-suited to exploit the strong local spatial correlation present in natural images (Hubel & Wiesel, 1968). In recent years, following a breakthrough by Krizhevsky et al. (2012) at the 2012 ImageNet challenge, CNNs have repeatedly demonstrated significant improvements in a large number of computer vision problems.

The major success of CNNs for visual data is justly credited to the convolution. But its strength is dependent on three crucial underlying attributes found in visual data.

The main contribution of this work is a generalization of CNNs to general graph-structured data, directed or undirected, offering a single method that incorporates the structural information present in the graph of the features into supervised learning algorithms. Due to the active research on learning the graph structure of features, this proves to be quite a general framework. As demonstrated by the examples, a large number of standard continuous regression and classification problems fall within the scope of this framework."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "1. Local connectivity assumption: The signal in visual data tends to be highly correlated in local regions, and mostly uncorrelated in global regions.

2. Shared weights assumption: The same convolution is globally valid across the image, resulting in a significant parameter reduction.

3. Grid structure of the image: Enabling a straightforward re-scaling of the feature layers through the process of max pooling.

These assumptions make it challenging to duplicate the success of CNNs on a different data structure. Nevertheless, CNNs have also proved effective for non-image data, usually relying on the grid structure of the inputs. Results on acoustic data (Hinton et al., 2012), videos (Le et al., 2011) and even the Go board (Silver et al., 2016) indicate that it might be sensible to generalize CNNs to other data structures that lack the underlying grid structure.

The main hurdle for generalizing CNNs to graph-structured data is to find a corresponding generalized convolution operator. We first consider a random walk on the graph in order to select the top k neighbors for every node during the pre-processing step, as Figure 1 shows. Then during the training process, the convolution is performed as an inner product of the weights and the selected top neighbors of the corresponding node in the preference order. Thus the weights are shared by each node and reflect the dependency between each node and its closest neighbors. When an image is considered as an undirected graph, this convolution operation is the same as the standard convolution. The proposed convolution is also applicable when the graph structure varies between observations.

In order to demonstrate the potential of the suggested method, we perform a set of experiments on the Merck molecular activity challenge and the MNIST data sets. The Merck molecular activity challenge data can be seen as a standard regression problem with significant correlation between the features. Essentially, for any regression or classification problem, the data can be visualized as a graph and its correlation matrix can be used to learn the corresponding graph structure. 
By treating the data as a graph, we show that a simple application of the graph convolutional neural network gives results that are comparable to state-of-the-art models."}, {"section_index": "2", "section_name": "2 LITERATURE REVIEW", "section_text": "Standard CNN architectures use a fixed-dimensional input, which makes it difficult to apply them on data with changing graph structure. Recently, Kalchbrenner et al. (2014) developed a CNN for modeling sentences of varying lengths. Another interesting example of a convolution over a changing graph structure has recently been suggested by (2015).

Several deep neural networks have been suggested in the past for predicting the properties of molecules (for example, Glen et al. (2006) and Lusci et al. (2013)). One of the proposed ideas is to extract features from the molecular structure into a fixed-dimensional feature vector and then use it as an input in a machine learning method. Specifically, Duvenaud et al. (2015) propose a neural network to extract features or molecular fingerprints from molecules that can be of arbitrary size and shape. Their neural network consists of layers which are local filters applied to all the nodes and their neighbors. After using several such convolutional layers to create representations of the original data, they apply a global pooling step to the features and feed that into a standard classifier. However, this method is limited in its ability to propagate information across the graph, bounded by the depth of the network in its pooling stage.

Figure 1: Visualization of the graph convolution of size 5. For a given node, the convolution is applied on the node and its 4 closest neighbors selected by the random walk. As the right figure demonstrates, the random walk can expand further into the graph to higher degree neighbors. The convolution weights are shared according to the neighbors' closeness to the nodes and applied globally on all nodes.

Graph theory has been heavily studied in the last few decades, both from mathematical and statistical/computational perspectives, with a large body of algorithms developed for a variety of problems. Despite that, research on algorithms that incorporate CNNs with graph-structured data is still emerging. The idea of extending CNNs to graph-structured data was recently explored by Bruna et al. (2013) and Henaff et al. (2015). They suggested two solutions. The first uses multi-scale clustering to define the network architecture, with the convolutions being defined per cluster without any weight sharing. The second defines the convolution through the eigenvalues of the graph Laplacian, weighting out the distance induced by the graph's similarity matrix. The drawback of these methods is that there is no easy way to induce weight sharing among the different nodes of the graph. Also, these methods only handle inputs of a fixed size, as the graph structure is fixed.

The problem of selecting nodes for a convolution on a graph is a particular instance of the problem of selecting local receptive fields in a general neural network. 
The work of Coates & Ng (2011) suggests selecting the local receptive fields in a general neural network according to the closest neighbors induced by the similarity matrix."}, {"section_index": "3", "section_name": "3 GRAPH CONVOLUTIONAL NEURAL NETWORK", "section_text": "In contrast to previous research, we suggest a novel efficient convolution that captures the local connectivity reflected in the graph structure. The convolution weights are shared among the different nodes and can even be applied to changing graph structures. We do so by considering the closest neighbors obtained in a random walk, using information contained in the similarity matrix.

The key step which differentiates CNNs on images from regular neural networks is the selection of neighbors on the grid in a k × k window, combined with the shared weight assumption. We propose a convolution operator analogous to the convolution performed on images in standard CNNs. In order to select the local neighbors of a given node, we use the graph transition matrix and calculate the expected number of visits of a random walk starting from the given node. The convolution is then applied on the nodes being visited the most. In this section we discuss the application of the convolution in a single layer on a single graph. It is immediate to extend the definition to more complex structures, and it will be explicitly explained below. We introduce some notation in order to proceed into further discussion.

Notation: Let G = (V, E) be a graph over a set of N features, V = (X_1, ..., X_N), and a set of edges E. Let P denote the transition matrix of a random walk on the graph, such that P_{ij} is the probability of moving from node X_i to X_j. Let the similarity matrix and the correlation matrix of the graph be given by S and R respectively. Define D as a diagonal matrix where D_{ii} = ∑_j S_{ij}.

If the graph structure is unknown, it can be learned using several unsupervised or supervised graph learning algorithms. Learning the data graph structure is an active research topic and is not in the scope of this paper. The interested reader can start with Belkin & Niyogi (2001) and Henaff et al. (2015), discussing similarity matrix estimation. We use the absolute value of the correlation matrix as the similarity matrix, following Roux et al. (2008), who showed that correlation between the features is usually enough to capture the geometrical structure of images. That is, we assume

S_{ij} = |R_{ij}|.

This work assumes the existence of the graph transition matrix P. This is not a restriction. If the graph structure of the data is already known, i.e. if the similarity matrix S is already known, then the transition matrix can be obtained, as explained in Lovász et al. (1996), by

P = D^{-1} S.

Once we derive the transition matrix P, we define Q^{(k)} := ∑_{i=0}^{k} P^i, where [P^k]_{ij} is the probability of transitioning from X_i to X_j in k steps. That is,

Q^{(0)} = I,  Q^{(1)} = I + P,  ...,  Q^{(k)} = ∑_{i=0}^{k} P^i,

so that Q^{(k)}_{ij} is the expected number of visits to X_j within the first k steps of a random walk on the graph starting at X_i. As k increases we incorporate neighbors further away from the node, while the act of summation still gives proper weights to the node itself and its closest neighbors. Figure 2 provides a visualization of the matrix Q over the 2-D grid.

To the best of our knowledge, this is the first use of the expected number of visits on a graph to select a neural network architecture; Coates & Ng (2011) and others suggest using the similarity matrix itself. This definition extends the notion of the similarity matrix, since Q^{(1)} agrees with the variable order induced by the similarity matrix. Furthermore, higher powers of P emphasize the graph structure of the data, giving more weight to major hubs. This might be valuable, for example, in social network data.
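This pre-processing step reduces, in a minimal numpy sketch, to the following; the function name and the assumption that the features are the columns of a data matrix X are illustrative, not part of the paper.

```python
import numpy as np

def expected_visits(X, k):
    # Similarity, transition matrix and Q^(k) = sum_{i=0}^k P^i for the feature graph.
    R = np.corrcoef(X, rowvar=False)        # correlation between features (columns)
    S = np.abs(R)                           # S_ij = |R_ij|
    P = S / S.sum(axis=1, keepdims=True)    # P = D^{-1} S, with D_ii = sum_j S_ij
    Q = np.eye(P.shape[0])                  # Q^(0) = I
    P_pow = np.eye(P.shape[0])
    for _ in range(k):
        P_pow = P_pow @ P                   # P^i
        Q += P_pow                          # accumulate expected visits
    return S, P, Q
```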
For every node i, let π_i^{(k)} : {1, ..., N} → {1, ..., N} denote the ordering of the nodes by their expected number of visits, i.e. a permutation such that

Q^{(k)}_{i, π_i^{(k)}(1)} ≥ Q^{(k)}_{i, π_i^{(k)}(2)} ≥ ... ≥ Q^{(k)}_{i, π_i^{(k)}(N)}.

The notion of ordered distance between the nodes is a global feature of all graphs and nodes. Therefore, we can take advantage of it to satisfy the desired shared weights assumption. We define Conv_1 as the size-p convolution over the graph G with nodes x ∈ R^N and weights w ∈ R^p, applied to the p nearest neighbors of each node, as the inner product

\mathrm{Conv}_1(x) = \begin{pmatrix} x_{\pi_1^{(k)}(1)} & \cdots & x_{\pi_1^{(k)}(p)} \\ \vdots & \ddots & \vdots \\ x_{\pi_N^{(k)}(1)} & \cdots & x_{\pi_N^{(k)}(p)} \end{pmatrix} \begin{pmatrix} w_1 \\ \vdots \\ w_p \end{pmatrix}, \quad \text{where } x = (x_1, \ldots, x_N)^\top.

The order of the weights follows from the distance induced by the transition matrix. That is, w_1 will be convolved with the variable which has the largest value in each row according to the matrix Q^{(k)}. For example, when Q^{(1)} = I + P, w_1 will always correspond to the node itself, and w_2 will correspond to the node's closest neighbor. For higher values of k, the order will be defined by the unique graph structure. An interesting attribute of this convolution, as compared to other convolutions on graphs, is that it preserves locality while still being applicable over different graphs with different structures.

It should be noted that Conv_1 is susceptible to the effects of negative correlation between the features and does not take into account the actual distance between the nodes (it only uses that for the selection of the closest neighbors of a node). Since the weights are being learned globally, in order to account for that, we have also defined Conv_2 as

\mathrm{Conv}_2(x) = \begin{pmatrix} y_{1,\pi_1^{(k)}(1)} & \cdots & y_{1,\pi_1^{(k)}(p)} \\ \vdots & \ddots & \vdots \\ y_{N,\pi_N^{(k)}(1)} & \cdots & y_{N,\pi_N^{(k)}(p)} \end{pmatrix} \begin{pmatrix} w_1 \\ \vdots \\ w_p \end{pmatrix}, \quad \text{where } y_{ij} = \mathrm{sign}(R_{ij}) \, Q^{(k)}_{ij} \, x_j \text{ for } i, j = 1, \ldots, N.

In practice Conv_2 performs slightly better than Conv_1, although the major differences between them are mostly smoothed out during the training process.
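A compact numpy sketch of both variants for a single-channel signal on a fixed graph follows; the function graph_conv and its argument layout are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def graph_conv(x, Q, w, R=None):
    # x: (N,) node signal; Q: (N, N) expected-visits matrix; w: (p,) shared weights.
    p = len(w)
    neighbors = np.argsort(-Q, axis=1)[:, :p]   # pi_i^(k): top-p columns of each row
    gathered = x[neighbors]                     # (N, p): operands of Conv_1
    if R is not None:
        # Conv_2 variant: y_ij = sign(R_ij) * Q_ij * x_j
        rows = np.arange(Q.shape[0])[:, None]
        gathered = np.sign(R[rows, neighbors]) * Q[rows, neighbors] * gathered
    return gathered @ w                         # (N,): one output value per node
```

Selecting the neighbors once per graph keeps the per-layer cost at O(N · p), matching the complexity discussion below.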
An important feature of the suggested convolution is the operation complexity. For a graph with N nodes, a single size-p convolution only requires O(N · p) operations, where p is a very small natural number (the number of neighbors considered). The major computational effort goes into the computation of Q, which is done once per graph structure in the pre-processing step.

Figure 2: Visualization of a row of Q on the graph generated over the 2-D grid at a node near the center, when connecting each node to its 8 adjacent neighbors. For k = 1, most of the weight is on the node, with smaller weights on the first order neighbors. This corresponds to a standard 3 × 3 convolution. As k increases, the number of active neighbors also increases, providing greater weight to neighbors farther away, while still keeping the local information.

The selection of the value of k is data dependent, as with every hyper-parameter. But there are two main components affecting its value. Firstly, it is necessary for k to be large enough to detect the top p neighbors of every node. If the transition matrix P is sparse, it might require higher values of k. Secondly, from properties of stochastic processes, we know that if we denote π as the Markov chain stationary distribution, then

lim_{k→∞} Q^{(k)}_{ij} / k = π_j  for all i, j.

This implies that for large values of k, local information will be smoothed out and the convolution will repeatedly be applied on the features with maximum connections. For this reason, we suggest that the value of k be kept relatively low (but large enough to capture a sufficient number of features when needed).

Similar to standard convolution implementations (Chellapilla et al., 2006), it is possible to represent the graph convolution as a tensor dot product, transferring most of the computational burden to the GPU while using highly optimized matrix product libraries.

For every graph convolution layer, we have as input a 3D tensor of observations, their features and depth. We first extend the input with an additional dimension that includes the top p neighbors of each feature selected by Q^{(k)}, transforming the input from a 3D to a 4D tensor:

(Observations, Features, Depth) → (Observations, Features, Neighbors, Depth).

Now for every graph convolution layer, the weights are a 3D tensor with the dimensions (Neighbors, Depth, Filters). Therefore application of a graph convolution, which is a tensor dot product between the input and the weights along the (Neighbors, Depth) axes, results in the output dimensions

((Observations, Features), (Neighbors, Depth)) • ((Neighbors, Depth), (Filters)) = (Observations, Features, Filters).

Implementation of the algorithm has been done using the Keras and Theano libraries in Python, inheriting all the tools provided by the libraries to train neural networks, such as dropout regularization, advanced optimizers and efficient initialization methods. Source code will be publicly available prior to the ICLR conference on the authors' website.
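The tensor formulation above can be sketched in a few lines of numpy; the gather/tensordot split and the function names are assumptions for illustration (the paper's actual implementation uses Keras and Theano).

```python
import numpy as np

def gather_neighbors(x3d, Q, p):
    # (Observations, Features, Depth) -> (Observations, Features, Neighbors, Depth)
    neighbors = np.argsort(-Q, axis=1)[:, :p]   # top-p neighbors per feature
    return x3d[:, neighbors, :]

def graph_conv_layer(x4d, weights):
    # weights: (Neighbors, Depth, Filters); dot product over the (Neighbors, Depth)
    # axes yields (Observations, Features, Filters).
    return np.tensordot(x4d, weights, axes=([2, 3], [0, 1]))
```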
Figure 3: Left: Visualization of the correlation matrix between the first 100 molecular descriptors (features) in the DPP4 Merck molecular activity challenge training set. The proposed method utilizes the correlation structure between the features. Right: Convergence of R² for the different methods on the test set. The graph convolution makes the convergence steadier by reducing the number of parameters."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In order to test the feasibility of the proposed CNN on graphs, we have conducted experiments on well-known data sets functioning as benchmarks: the Merck molecular activity challenge and MNIST. Both data sets are popular and well-studied challenges in computational biology and computer vision respectively.

In all the implementations we kept the architecture shallow and simple, instead of deep and complex. This was done to enable better comparisons between the models, and to reduce the chance of over-fitting the test set by the model selection process. The hyper-parameters were chosen arbitrarily when possible rather than being tuned and optimized. Nevertheless, we still report state-of-the-art, or competitive, results on the experimented data sets.

In this section, we denote a graph convolution layer with k feature maps by C_k, and a fully connected layer with k hidden units by FC_k."}, {"section_index": "5", "section_name": "4.1 MERCK MOLECULAR ACTIVITY CHALLENGE", "section_text": "The Merck molecular activity challenge is a Kaggle challenge based on 15 molecular activity data sets. The target is to predict activity levels for different molecules based on the structure between the different atoms in the molecule. This helps to identify molecules in medicines which hit the intended target and do not cause side effects.

Following Henaff et al. (2015), we apply our algorithm on the DPP4 dataset. DPP4 contains 6148 training and 2045 test molecules. Some of the features of the molecules are very sparse and are only active in a handful of molecules. For these features, the correlation estimation is not very accurate. Therefore we use features that are active in at least 20 molecules (observations). This results in 2153 features. As can be seen in Figure 3, there is significant correlation structure between different features. This implies strong connectivity among the features, which is important for the application of the proposed method.

The training was performed using the Adam optimization procedure (Kingma & Ba, 2014), where the gradients are derived from the back-propagation algorithm. We used a learning rate of α = 0.001, fixed the number of epochs to 40, and implemented dropout regularization on every layer during the optimization procedure. The correlation matrix absolute values were used to learn the graph structure. We found that a small number of nearest neighbors (p) between 5 and 10 works best, and used p = 5 in all models.

Since this is a regression problem, we used the root mean-squared error (RMSE) loss. Following the standard set by the Kaggle challenge, results are reported in terms of the squared correlation,

R² = Cor(Y, Ŷ)²,

where Y is the actual activity level and Ŷ is the predicted one.

Method                                  Architecture              R²
OLS Regression                          -                         0.135
Random Forest                           -                         0.232
Merck winner DNN                        -                         0.224
Spectral Networks                       C64-P8-C64-P8-FC1000      0.204
Spectral Networks (supervised graph)    C16-P4-C16-P4-FC1000      0.277
Fully connected NN                      FC300-FC100               0.192
Graph CNN                               C10                       0.246
Graph CNN                               C10-FC100                 0.258
Graph CNN                               C10-C20-FC300             0.268

Table 1: The squared correlation between the actual activity levels and predicted activity levels, R², for different methods on the DPP4 data set from the Merck molecular activity challenge.

The convergence plot given in Figure 3 demonstrates convergence of the selected architectures. The contribution of the suggested convolution is explained in view of the alternatives:

• Fully connected Neural Network: Models first applying convolution, followed by a fully connected hidden layer, converge better than more complex fully connected models. Furthermore, convergence is more stable in comparison to the fully connected models, due to the parameter reduction.

• Linear Regression: Optimizing over the set of convolutions is often considered as automation of the feature extraction process. From that perspective, a simple application of one layer of convolution, followed by linear regression, significantly outperforms the results of a standalone linear regression.

Table 1 provides more thorough R² results for the different architectures explored, and compares them to two of the winners of the Kaggle challenge, namely the Deep Neural Network and the Random Forest in Ma et al. (2015). We perform better than both of the winners of the Kaggle contest. Since the challenge is already over, and we had full access to the test set, the results should mostly be considered as a proof of concept.

The models in Henaff et al. (2015) and Bruna et al. (2013) use a spectral approach, and currently hold the state-of-the-art. In comparison to them, we perform better than the Spectral Networks CNN on the unsupervised graph structure, which is equivalent to what was done here by using the correlation matrix as the similarity matrix. The one using Spectral Networks on a supervised graph structure holds the state-of-the-art by learning the graph structure. This is a direction we have not yet explored, as graph learning is beyond the scope of this paper, although it would be straightforward to apply the proposed graph CNN in a similar way to any learned graph."}, {"section_index": "6", "section_name": "4.2 MNIST DATA", "section_text": "The MNIST data often functions as a benchmark data set to test new machine learning methods. We have experimented with two different graph structures for the images. First, we considered the images as observations from an undirected graph on the 2-D grid, where each pixel is connected to its 8 adjacent neighbor pixels. We used the convolutions over the grid structure as presented in Figure 2, with p = 25 nearest neighbors selected from Q. Due to the symmetry of the graph in most regions of the image, many pixels have equal distance from the pixel being convolved. If ties were broken in a consistent manner, this example would reduce to the regular convolution on a 5 × 5 window for exactly the entire space except pixels 3 steps away from the boundary. In order to make the example more compelling, we have broken ties arbitrarily, making the training process harder compared to a regular CNN. Imitating LeNet with two graph convolution and pooling layers followed by a fully connected layer and a linear classifier resulted in a 1.1% error rate. This is worse than a regular CNN, which with a similar architecture achieves around a 0.75% error rate, and better than a fully connected neural network, which achieves around 1.4%, as expected from the complexity differences of the models.

Second, we used the correlation matrix to estimate the graph structure directly from the pixels. Since some of the MNIST pixels are constant (e.g. the corners are always black), we restricted the data to the 717 active, non-constant pixels. We used Q^{(1)} with p = 6 as the number of neighbors. This was done in order to ensure that the spatial structure of the image no longer affects the results. With only 6 neighbors, and a partial subset of the pixels, the relative location of the top correlated pixels necessarily varies per pixel. As a result, regular CNNs are no longer applicable to the data, and we have compared the performance to fully connected neural networks.
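For the second experiment, the graph construction from raw pixels could look like the following minimal sketch; the function name and the exact treatment of constant pixels are illustrative assumptions.

```python
import numpy as np

def mnist_pixel_graph(images, p=6):
    # images: (n, 784) flattened digits; drop constant pixels before correlating.
    active = images.std(axis=0) > 0           # ~717 active pixels remain
    X = images[:, active]
    S = np.abs(np.corrcoef(X, rowvar=False))  # similarity from |correlation|
    P = S / S.sum(axis=1, keepdims=True)      # transition matrix
    Q1 = np.eye(P.shape[0]) + P               # Q^(1) = I + P
    neighbors = np.argsort(-Q1, axis=1)[:, :p]
    return active, neighbors
```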
Method                  Error Rate (%)    # of Parameters
Logistic Regression     7.49              7,180
C20                     2.24              143,410
C20-C20                 1.71              145,970
C20-FC512               1.39              7,347,862
FC512-FC512             1.42              635,402

Table 2: Error rates of different methods on the MNIST digit recognition task.

Table 2 presents the experiment results. During training, a dropout rate of 0.2 has been applied on all layers to prevent overfitting. In all the experiments the final layer is the standard softmax logistic regression classifier. The Graph CNN performs on par with fully connected neural networks, with a lower number of parameters. Also, a single layer of graph convolution followed by logistic regression greatly improves the performance of logistic regression, demonstrating the potential of the graph convolution for feature extraction purposes. As with regular convolutions, C20-FC512 had over 7M parameters, due to the fact that the convolution uses a small number of parameters to generate different maps of the input. This implies that the graph convolution might be even more effective with the development of efficient pooling methods on graphs, a problem that will be covered in future research."}, {"section_index": "7", "section_name": "5 CONCLUSIONS", "section_text": "We suggest a method to address the problem of supervised learning over graph-structured data by extending convolutional neural networks to graph input. Our main contribution is a new way to define a convolution over a graph that can handle different graph structures as its input. The convolution can be applied to standard regression or classification problems by learning the graph structure in the data, using the correlation matrix or other methods. Compared to a fully connected layer, the suggested convolution has a significantly lower number of parameters, while providing stable convergence and comparable performance. We validated and demonstrated the predictive performance of our proposed method on benchmark machine learning data sets such as the Merck molecular activity data set and MNIST.

Convolutional neural networks have already revolutionized the fields of computer vision, speech recognition and language processing. We think an important step forward is to extend them to all other problems which have an inherent graph structure within them."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Alessandro Rinaldo, Ruslan Salakhutdinov and Matthew Gormley for suggestions, insights and remarks that have greatly improved the quality of this paper."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Belkin, Mikhail and Niyogi, Partha. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pp. 585-591, 2001.

Coates, Adam and Ng, Andrew Y. Selecting receptive fields in deep networks. In NIPS, pp. 2528-2536, 2011.

Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.

Hubel, David H and Wiesel, Torsten N. Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1):215-243, 1968.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Le, Quoc V, Zou, Will Y, Yeung, Serena Y, and Ng, Andrew Y. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. 
In Computer\nVision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 3361\u20143368. IEEE, 2011.\nLeCun, Yann, Bottou, L\u00e9on, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied\nto document recognition. Proceedings of the IEEE, 86(11):2278\u20142324, 1998."}]
rJfMusFll
[{"section_index": "0", "section_name": "BATCH POLICY GRADIENT METHODS FOR IMPROVING NEURAL CONVERSATION MODELS", "section_text": "Kirthevasan Kandasamy*
Carnegie Mellon University, Pittsburgh, PA, USA
kandasamy@cs.cmu.edu
{ryoto, dtarlow, dacart}@microsoft.com
We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Nonetheless, there are some important differences in the above scenario when compared to the more popular approaches for RL.
Yoram Bachrach
DigitalGenius Ltd., London, UK"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Chatbots are one of the classical applications of artificial intelligence and are now ubiquitous in technology, business and everyday life. Many corporate entities are now increasingly using chatbots to either replace or assist humans in customer service contexts. For example, Microsoft is currently actively building a chatbot to optimise and streamline its technical support service.

In these scenarios, there is usually an abundance of historical data, since past conversations between customers and human customer service agents are usually recorded by organisations. An apparently straightforward solution would be to train chatbots to reproduce the responses by human agents using standard techniques such as maximum likelihood. While this seems natural, it is far from desirable for several reasons. It has been observed that such procedures have a tendency to produce very generic responses (Sordoni et al., 2015). For instance, when we trained chatbots via maximum likelihood on a restaurant recommendations dataset, they repeatedly output responses to the effect of How large is your group?, What is your budget?, etc. Further, they also produce responses such as Let me look that up. or Give me a second., which, although permissible for a human agent to say, are not appropriate for a chatbot. Although there are ways to increase the diversity of responses (Li et al., 2015), our focus is on encouraging the bot to meaningfully advance the conversation. One way to address this problem is to provide some form of weak supervision for responses generated by a chatbot. For example, a human labeller, such as a quality assurance agent, could score each response generated by a chatbot in a conversation with a customer. This brings us to the reinforcement learning (RL) paradigm where these rewards (scores) are to be used to train a good chatbot. In this paper we will use the terms score, label, and reward interchangeably. Labelled data will mean conversations which have been assigned a reward of some form as explained above.

• Noisy and expensive rewards: Obtaining labels for each conversation can be time consuming and economically expensive. 
As a result, there is a limited amount of labelled data available. Moreover, labels produced by humans are invariably noisy due to human error and subjectivity.

• Off-line evaluations: Unlike conventional RL settings, such as games, where we try to find the optimal policy while interacting with the system, the rewards here are not immediately available. Previous conversations are collected, labelled by human experts, and then given to an algorithm which has to make do with the data it has.

If labelled data is in short supply, reinforcement learning could be hopeless. However, if unlabelled data can be used to train a decent initial bot, say via maximum likelihood, we can use policy iteration techniques to refine this bot by making local improvements using the labelled data (Bellman, 1956). Besides chatbots, this framework also finds applications in tasks such as question answering (Ferrucci et al., 2010; Hermann et al., 2015; Sachan et al., 2016), generating image descriptions (Karpathy & Fei-Fei, 2015) and machine translation (Bahdanau et al., 2014), where a human labeller can provide weak supervision in the form of a score to a sentence generated by a bot.

To contextualise the work in this paper, we make two important distinctions in policy iteration methods in reinforcement learning. The first is on-policy vs off-policy. In on-policy settings, the goal is to improve the current policy on the fly while exploring the space. On-policy methods are used in applications where it is necessary to be competitive (achieve high rewards) while simultaneously exploring the environment. In off-policy, the environment is explored using a behaviour policy, but the goal is to improve a different target policy. The second distinction is on-line vs batch (off-line). In on-line settings one can interact with the environment. In batch methods, which is the setting for this work, one is given past exploration data from possibly several behaviour policies and the goal is to improve a target policy using this data. On-line methods can be either on-policy or off-policy, whereas batch methods are necessarily off-policy.

In this paper, we study reinforcement learning in batch settings, for improving chatbots with Seq2Seq recurrent neural network (RNN) architectures. One of the challenges when compared to on-line learning is that we do not have interactive control over the environment. We can only hope to do as well as our data permits us to. On the other hand, the batch setting affords us some luxuries. We can reuse existing data and use standard techniques for hyper-parameter tuning based on cross validation. Further, in on-line policy updates, we have to be able to "guess" how an episode will play out, i.e. actions the behaviour/target policies would take in the future and corresponding rewards. However, in batch learning, the future actions and rewards are directly available in the data. This enables us to make more informed choices when updating our policy."}, {"section_index": "3", "section_name": "RELATED WORK", "section_text": "Recently there has been a surge of interest in deep learning approaches to reinforcement learning, many of them adopting Q-learning, e.g. (He et al., 2015; Mnih et al., 2013; Narasimhan et al., 2015). In Q-learning, the goal is to estimate the optimal action value function Q*. Then, when an agent is at a given state, it chooses the best greedy action according to Q*. While Q-learning has been successful in several applications, it is challenging in the settings we consider, since estimating Q* over large action and state spaces will require a vast number of samples. In this context, policy iteration methods are more promising since we can start with an initial policy and make incremental local improvements using the data we have. 
This is especially true given that we can use maximum likelihood techniques to estimate a good initial bot using unlabelled data.

Policy gradient methods, which fall within the paradigm of policy iteration, make changes to the parameters of a policy along the gradient of a desired objective (Sutton et al., 1999). Recently, the natural language processing (NLP) literature has turned its attention to policy gradient methods for improving language models. Ranzato et al. (2015) present a method based on the classical REINFORCE algorithm (Williams, 1992) for improving machine translation after preliminary training with maximum likelihood objectives. Bahdanau et al. (2016) present an actor-critic method, also for machine translation. In both cases, as the reward, the authors use the BLEU (bilingual evaluation understudy) score of the output and the translation in the training dataset. This setting, where the rewards are deterministic and cheaply computable, does not reflect difficulties inherent to training chatbots, where labels are noisy and expensive. Li et al. (2016) develop a policy gradient method for chatbots. However, they use user-defined rewards (based on some simple rules) which, once again, are cheaply obtained and deterministic. Perhaps the closest to our work is that of Williams & Zweig (2016), who use a REINFORCE based method for chatbots. We discuss the differences of this and other methods in greater detail in Section 3.

The remainder of this manuscript is organised as follows. In Section 2 we review Seq2Seq models and Markov decision processes (MDP) and describe our framework for batch reinforcement learning. Section 3 presents our method BPG and compares it with prior work in the RL and NLP literature. Section 4 presents experiments on a synthetic task and a customer service dataset for restaurant recommendations.

The goal of a Seq2Seq model in natural language processing is to produce an output sequence y = [a_1, a_2, ..., a_T] given an input sequence x (Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014). Here a_i ∈ A, where A is a vocabulary of words. For example, in machine translation from French to English, x is the input sequence in French, and y is its translation in English. In customer service chatbots, x is the conversation history until the customer's last query and y is the response by an agent/chatbot. In a Seq2Seq model, we use an encoder network to represent the input sequence as a Euclidean vector and then a decoder network to convert this vector to an output sequence. Typically, both the encoder and decoder networks are recurrent neural networks (RNN) (Mikolov et al., 2010), where the recurrent unit processes each word in the input/output sequences one at a time. 
In this work, we will use the LSTM (long short term memory) (Hochreiter & Schmidhuber, 1997) as our recurrent unit due to its empirical success in several applications."}, {"section_index": "4", "section_name": "2.2 A REVIEW OF MARKOV DECISION PROCESSES (MDP)", "section_text": "We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action a in a state s and transitions to a state s'. An episode refers to a sequence of transitions s_1 → a_1 → s_2 → a_2 → ... → a_T → s_{T+1} until the agent reaches a terminal state s_{T+1}. At a terminal state, the agent receives a reward. Formally, an MDP is the triplet (S, A, R). Here, S is a set of states and A is a set of actions. When we take an action a at state s we transition to a new state s' = s'(s, a) which, in this work, will be deterministic. A will be a finite but large discrete set and S will be discrete but potentially infinite. R : S → R is the expected reward function such that when we receive a reward r at state s ∈ S, E[r] = R(s). Let S_0 ⊂ S be a set of terminal states. When we transition to any s ∈ S_0, the episode ends. In this work, we will assume that the rewards are received only at a terminal state, i.e. R(s) is nonzero only on S_0.

A policy π is a rule to select an action at a given state. We will be focusing on stochastic policies π : A × S → R, where π(a|s) denotes the probability an agent will execute action a at state s. We define the value function V^π : S → R of policy π, where V^π(s) is the expected reward at the end of the episode when we follow policy π from state s. For any terminal state s ∈ S_0, V^π(s) = R(s) regardless of π. We will also find it useful to define the action-value function Q^π : S × A → R, where Q^π(s, a) is the expected reward of taking action a at state s and then following policy π. With deterministic state transitions this is simply Q^π(s, a) = V^π(s'(s, a)). It can be verified that V^π(s) = E_{a∼π(·|s)}[Q^π(s, a)] (Sutton & Barto, 1998).

We now frame our learning from labels scenario for RNN chatbots as an MDP. The treatment has similarities to some recent RL work in the NLP literature discussed above.

In its most basic form, the decoder RNN can be interpreted as assigning a probability distribution over A given the current "state". At time t, the state s_t is the input sequence x and the words y_{t-1} = [a_1, ..., a_{t-1}] produced by the decoder thus far, i.e. s_t = (x, y_{t-1}). We sample the next word a_t from this probability distribution π(·|s_t), then update our state s_{t+1} = (x, y_t) where y_t = [y_{t-1}, a_t], and proceed in a similar fashion. The vocabulary A contains an end-of-statement token <EOS>. If we sample <EOS> at time T + 1, we terminate the sequence and output y_T.

Let x be the input and y_{t-1} = [a_1, ..., a_{t-1}] be the words output by the decoder until time t. The state of our MDP at time t of the current episode will be s_t = (x, y_{t-1}). Therefore, the set of states S will be all possible pairs of inputs and partial output sequences. The actions A will be the vocabulary. The terminal states S_0 will be (x, y) such that the last literal of y is <EOS>. The stochastic policy π will be a Seq2Seq RNN which produces a distribution over A given state s_t. When we wish to make the dependence of the policy on the RNN parameters θ explicit, we will write π_θ. When we sample an action a_t ∼ π(·|s_t), we deterministically transition to state (x, [y_{t-1}, a_t]). If we sample a_{T+1} = <EOS> at time T + 1, the episode terminates and we observe a stochastic reward.
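The decoder-as-policy dynamics described above amount to the following schematic loop; policy, vocab_size and eos_id are placeholder names and the interface is an assumption for illustration.

```python
import numpy as np

def sample_episode(policy, x, vocab_size, eos_id, max_len=64):
    # policy(state) returns a probability vector over the vocabulary A.
    # The state is the input x together with the partial output; transitions
    # are deterministic, and the episode ends when <EOS> is sampled.
    y = []
    for _ in range(max_len):
        probs = policy((x, tuple(y)))                 # pi(. | s_t)
        a = int(np.random.choice(vocab_size, p=probs))
        if a == eos_id:
            break                                     # terminal state reached
        y.append(a)
    return y   # a stochastic reward is observed only at the terminal state
```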
We are given a dataset of input-output-reward triples {(x^{(i)}, y^{(i)}, r_i)}_{i=1}^n, where y^{(i)} = (a_1^{(i)}, ..., a_{T_i}^{(i)}, <EOS>) is the sequence of output words. This data was collected from possibly multiple behaviour policies which output y^{(i)} for the given input x^{(i)}. In the above customer service example, the behaviour policies could be chatbots, or even humans, which were used for conversations with a customer. The rewards r_i are scores assigned by a human quality assurance agent to each response of the chatbot. Our goal is to use this data to improve a given target policy π_θ. We will use q to denote the distribution of the data: q(s) is the distribution of the states in the dataset, q(a|s) is the conditional distribution of an action given a state, and q(s, a) = q(s)q(a|s) is the joint distribution over states and actions. q will be determined by the initial distribution of the inputs x^{(i)} and the behaviour policies used to collect the training data. Our aim is to find a policy that does well with respect to q. Specifically, we wish to maximise the following objective:

J(θ) = ∑_{s∈S} q(s) V^{π_θ}(s).    (1)

Here, the value function V^{π_θ} is not available to us but has to be estimated from the data. This is similar to objectives used in the on-line off-policy policy gradient literature, where q is replaced by the limiting distribution of the behaviour policy (Degris et al., 2012). In the derivation of our algorithm we will need to know q(a|s) to compute the gradient of our objective. In off-policy reinforcement learning settings this is given by the behaviour policy, which is readily available. If the behaviour policy is available to us, then we can use this too. Otherwise, a simple alternative is to "learn" a behaviour policy. For example, in our experiments we used an RNN trained using the unlabelled data to obtain values for q(a|s). As long as this learned policy can capture the semantics of natural language (for example, the word apple is more likely than car when the current state is (x, I ate an)), then it can be expected to do reasonably well. In the following section, we will derive a stochastic gradient descent (SGD) procedure that will approximately optimise (1).

Before we proceed, we note that it is customary in the RL literature to assume stochastic transitions between states and use rewards at all time steps instead of the terminal step. Further, the future rewards are usually discounted by a discount factor γ < 1. While we use the above formalism to simplify the exposition, the ideas presented here extend naturally to more conventional settings.

Our derivation follows the blueprint in Degris et al. (2012), who derive an off-policy on-line actor-critic algorithm. Following standard policy gradient methods, we will aim to update the policy by taking steps along the gradient of the objective, ∇J(θ):

∇J(θ) = ∇ E_{s∼q}[ ∑_{a∈A} π_θ(a|s) Q^{π_θ}(s, a) ] = E_{s∼q}[ ∑_{a∈A} ∇π_θ(a|s) Q^{π_θ}(s, a) + π_θ(a|s) ∇Q^{π_θ}(s, a) ].

The latter term inside the above summation is difficult to work with, so the first step is to ignore it and work with the approximate gradient g(θ) = E_{s∼q}[ ∑_{a∈A} ∇π_θ(a|s) Q^{π_θ}(s, a) ] ≈ ∇J(θ). Degris et al. 
(2012) provide theoretical justification for this approximation in off-policy settings by establishing that J(θ) ≤ J(θ + αg(θ)) for all small enough α. Expanding on g(θ), we obtain:

g(θ) = E_{s∼q}[ ∑_{a∈A} q(a|s) (π_θ(a|s)/q(a|s)) ψ(a,s) Q^{π_θ}(s, a) ]
     = E_{(s,a)∼q}[ ρ(s,a) ψ(a,s) Q^{π_θ}(s,a) ]
     = E_{(s_t,a_t)∼q}[ ρ(s_t,a_t) ψ(a_t,s_t) (Q^{π_θ}(s_t,a_t) − V^{π_θ}(s_t)) ].    (2)

Here ψ(a,s) = ∇ log π_θ(a|s) is the score function of the policy and ρ(s,a) = π_θ(a|s)/q(a|s) is the importance sampling coefficient. In the last step, we have used the fact that E_{a∼q(·|s)}[ρ(s,a) ψ(a,s) h(s)] = 0 for any function h : S → R of the current state (Szepesvári, 2010). The purpose of introducing the value function V^{π_θ} is to reduce the variance of the SGD updates: we want to assess how good or bad action a_t is relative to how well π_θ will do at state s_t in expectation. If a_t is a good action (Q^{π_θ}(s_t, a_t) is large relative to V^{π_θ}(s_t)), the coefficient of the score function is positive and it will change θ so as to assign a higher probability to action a_t at state s_t.

The Q^{π_θ}, V^{π_θ} functions are not available to us, so we will replace them with estimates. For V^{π_θ}(s_t) we will use an estimate V̂(s_t); we will discuss choices for this shortly. However, the action value function is usually not estimated in RL policy gradient settings to avoid the high sample complexity. A sensible stochastic approximation for Q^{π_θ}(s_t, a_t) is to use the sum of future rewards from the current state (Sutton & Barto, 1998).¹ If we receive reward r at the end of the episode, we can then use Q^{π_θ}(s_t, a_t) ≈ r for all time steps t in the episode. However, since q(a_t|s_t) is different from π_θ(a_t|s_t), we will need to re-weight future rewards via importance sampling, r ∏_{i>t} ρ(s_i, a_i). This is to account for the fact that an action a given s may have been more likely under the policy π_θ(·|s) than it was under q(·|s), or vice versa. Instead of directly using the re-weighted rewards, we will use the so-called λ-return, which is a convex combination of the re-weighted rewards and the value function (Sutton, 1988; 1984). In our setting, they are defined recursively from the end of the episode t = T + 1 to t = 1 as follows. For λ ∈ (0, 1],

r^λ_{T+1} = r,    r^λ_t = (1 − λ) V^{π_θ}(s_{t+1}) + λ ρ(s_t, a_t) r^λ_{t+1}  for t = T, ..., 1.    (3)

¹Note Q^{π_θ}(s_t, a_t) = V^{π_θ}(s_{t+1}) for deterministic transitions. However, it is important not to interpret this term in (2) as the difference in the value function between successive states. Conditioned on the current time step, V^{π_θ}(s_t) is deterministic, while V^{π_θ}(s_{t+1}) is stochastic. In particular, while a crude estimate suffices for the former, the latter is critical and should reflect the rewards received during the remainder of the episode.

The purpose of introducing λ is to reduce the variance of using the future rewards alone as an estimate for Q^{π_θ}(s_t, a_t). This is primarily useful when rewards are noisy. If the rewards are deterministic, λ = 1, which ignores the value function, is the best choice. In noisy settings, it is recommended to use λ < 1 (see Sec 3.1 of Szepesvári (2010)). In our algorithm, we will replace r^λ_t with r̂^λ_t, where V^{π_θ} is replaced with the estimate V̂. Putting it all together, and letting α denote the step size, we have the following update rule for the parameters θ of our policy:

θ ← θ + α ρ(s_t, a_t) ψ(s_t, a_t) (r̂^λ_t − V̂(s_t)).

In Algorithm 1, we have summarised the procedure where the updates are performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches.

An Estimator for the Value Function: All that is left to do is to specify an estimator V̂ for the value function. We first need to acknowledge that this is a difficult problem: S is quite large and for typical applications of this work there might not be enough data since labels are expensive. That said, the purpose of V̂ in (2), (3) is to reduce the variance of our SGD updates and speed up convergence, so it is not critical that this be precise; even a bad estimator will converge eventually. Secondly, standard methods for estimating the value function based on minimising the projected Bellman error require the second derivatives, which might be intractable for highly nonlinear parametrisations of V̂ (Maei, 2011). For these two statistical and computational reasons, we resort to simple estimators for V̂. We will study two options. The first is a simple heuristic used previously in the RL literature, namely a constant estimator for V̂ which is equal to the mean of all rewards in the dataset (Williams, 1992). The second uses the parametrisation V̂(s) = σ(ξ^⊤φ(s)), where σ is the logistic function and φ(s) ∈ R^d is a Euclidean representation of the state. For V̂(s) of the above form, the Hessian ∇²V̂(s) can be computed in O(d) time. To estimate this value function, we use the GTD(λ) estimator from Maei (2011). As φ(s) we will be using the hidden state of the LSTM. The rationale for this is as follows. In an LSTM trained using maximum likelihood, the hidden state contains useful information about the objective. If there is overlap between the maximum likelihood and reinforcement learning objectives, we can expect the hidden state to also carry useful information about the RL objective. Therefore, we can use the hidden state to estimate the value function whose expectation is the RL objective. We have described our implementation of GTD(λ) in Appendix A and specified some implementation details in Section 4.

Algorithm 1 Batch Policy Gradient (BPG)
Given: Data {(x_i, y_i, r_i)}_{i=1}^n, step size α, return coefficient λ, initial θ_0.
- Set θ ← θ_0.
- For each epoch k = 1, 2, ...
  ▷ Set Δθ ← 0.
  ▷ For each episode i = 1, ..., n
    • r̂^λ_{T_i+1} ← r_i
    • ρ_t ← π_θ(a_t^{(i)} | s_t^{(i)}) / q(a_t^{(i)} | s_t^{(i)}) for t = 1, ..., T_i.
    • For each time step in reverse t = T_i, ..., 1:
      (i) r̂^λ_t ← (1 − λ) V̂(s_{t+1}^{(i)}) + λ ρ_t r̂^λ_{t+1}
      (ii) Δθ ← Δθ + (1/T_i) ρ_t ψ(s_t^{(i)}, a_t^{(i)}) (r̂^λ_t − V̂(s_t^{(i)}))
      (iii) Compute updates for the value function estimate V̂.
  ▷ Update the policy: θ ← θ + α Δθ.
  ▷ Update the value function estimate V̂.
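The per-episode computation in Algorithm 1 can be sketched as follows; pi, q, v_hat and score are assumed callables, the per-episode normalisation mirrors the implementation note in Section 4, and the clipping discussed there is omitted.

```python
import numpy as np

def bpg_episode_update(states, actions, reward, pi, q, v_hat, score, lam):
    # states: s_1..s_{T+1}; actions: a_1..a_T; reward observed at the terminal state.
    # score(s, a) returns grad log pi_theta(a|s) as a flat numpy vector.
    T = len(actions)
    rho = [pi(actions[t], states[t]) / q(actions[t], states[t]) for t in range(T)]
    r_lam = reward                 # r^lambda_{T+1} = r
    delta = 0.0
    for t in reversed(range(T)):
        # lambda-return, eq. (3): blend the value estimate with future rewards.
        r_lam = (1.0 - lam) * v_hat(states[t + 1]) + lam * rho[t] * r_lam
        delta = delta + rho[t] * score(states[t], actions[t]) * (r_lam - v_hat(states[t]))
    return delta / T   # accumulated over episodes into theta <- theta + alpha * delta
```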
Policy gradient methods have been studied extensively in on-policy settings, where the goal is to improve the current policy on the fly (Amari, 1998; Williams, 1992). To our knowledge, all RL approaches in Seq2Seq models have also adopted on-policy policy gradient updates (Bahdanau et al., 2016; Li et al., 2016; Ranzato et al., 2015; Williams & Zweig, 2016). However, on-policy methods break down in off-policy settings, because any update must account for the probability of the action under the target policy. For example, suppose the behaviour policy took action a at state s and received a low reward. Then we should modify the target policy θ so as to reduce π_θ(a|s). However, if the target policy is already assigning low probability to a|s, then we should not be as aggressive when making the updates. The re-weighting ρ(s, a) via importance sampling does precisely this.

A second difference is that we study batch RL. Standard on-line methods are designed for settings where we have to continually improve the target while exploring using the behaviour policy. Critical to such methods are the estimation of future rewards at the current state and the future actions that will be taken by both the behaviour and target policies. In order to tackle this, previous research either ignores future rewards altogether (Williams, 1992), resorts to heuristics to distribute a delayed reward to previous time steps (Bahdanau et al., 2016; Williams & Zweig, 2016), or makes additional assumptions about the distribution of the states, such as stationarity of the Markov process (Degris et al., 2012; Maei, 2011). However, in batch settings, the λ-return from a given time step can be computed directly (3), since the future actions and rewards are available in the dataset. Access to this information provides a crucial advantage over techniques designed for on-line settings."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "Implementation Details: We implement our methods using Chainer (Tokui et al., 2015), and group sentences of the same length together in the same batch to make use of GPU parallelisation. Since different batches could be of different lengths, we do not normalise the gradients by the batch size, as we should take larger steps after seeing more data. However, we normalise by the length of the output sequence to allocate equal weight to all sentences. We truncate all output sequences to length 64 and use a maximum batch size of 32. We found it necessary to use a very small step size, otherwise the algorithm has a tendency to get stuck at bad parameter values. While importance re-weighting is necessary in off-policy settings, it can increase the variance of the updates, especially when q(a_t|s_t) is very small. A common technique to alleviate this problem is to clip the ρ(s_t, a_t) value (Swaminathan & Joachims, 2015). In addition to single ρ(s_t, a_t) values, our procedure has a product of ρ(s_t, a_t) values when computing the future rewards (3). The effect of large ρ values is a large weight ρ_t (r̂^λ_t − V̂(s_t)) for the score function in step (ii) of Algorithm 1. In our implementation, we clip this weight at 5, which controls the variance of the updates and ensures that a single example does not disproportionately affect the gradient.
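In code, the clipping described above is a one-liner; the symmetric interval is our assumption, since the text only states that the weight is clipped at 5.

```python
import numpy as np

def clipped_weight(rho_t, r_lam_t, v_s_t, cap=5.0):
    # Clip the importance-weighted coefficient rho_t * (r^lambda_t - V(s_t))
    # so that a single example cannot dominate the gradient.
    return float(np.clip(rho_t * (r_lam_t - v_s_t), -cap, cap))
```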
To facilitate a fair comparison, we only modify the top LSTM anc\nsoftmax layers in all methods. We have illustrated this set up in Fig. 1. We note that if one is conten\nwith using the constant estimator. then one can change all parameters of the RNN.\nTo convey the main intuitions of our method, we compare our methods against other baselines o1\na synthetic task on the European parliament proceedings corpus (Koehn, 2005). We describe thi\nexperimental set up briefly, deferring details to Appendix B.1. The input sequence to the RNN wa:\neach sentence in the dataset. Given an input, the goal was to reproduce the words in the input withou\nrepeating words in a list of forbidden words. The RL algorithm does not explicitly know either goa\nof the objective but has to infer it from the stochastic rewards assigned to input output sequences 11\nthe dataset. We used a training set of 500 input-output-reward triplets for the RL methods.\nWe initialised all methods by maximum likelihood training on 6000 input output sequences wher\nthe output sequence was the reverse of the input sequence. The maximum likelihood objectiv\ncaptures part of the RL objective. This set up reflects naturally occurring practical scenarios for th\nalgorithm where a large amount unlabelled data can be used to bootstrap a policy if the maximur\nlikelihood and reinforcement learning objectives are at least partially aligned. We trained the RI\nalgorithms for 200 epochs on the training set. At the end of each epoch, we generated outputs fror\nthe policy on test set of 500 inputs and scored them according to our criterion. We plot the test se\nsrror against the number of epochs for various methods in Fig. 2.\nFig. 2(a) compares 3 methods: BPG with and without maximum likelihood initialisation and a\nversion of BPG which does not use importance sampling. Clearly, bootstrapping an RL algorithm\nwith ML can be advantageous especially if data is abundantly available for ML training. Further,\nwithout importance sampling, the algorithm is not as competitive for reasons described in Section 3.\nIn all 3 cases, we used a constant estimator for V and \\ = 0.5. The dashed line indicates the\nperformance of ML training alone. BPG-NIS is similar to the algorithms of Ranzato et al. (2015);\nWilliams & Zweig (2016) except that there, their methods implicitly use \\ = 1.\nFig. 2(b) compares 4 methods: BPG and its on-line version OPG with constant (CONST) and\nGTD(A) estimators for V. The on-line versions of the algorithms are a direct implementation of the\nmethod in Degris et al. (2012) which do not use the future rewards as we do. The first observation\nis that while GTD()) is slightly better in the early iterations, it performs roughly the same as us-\ning a constant estimator in the long run. Next, BPG performs significantly better than OPG. We\nbelieve this is due to the following two reasons. First, the online updates assume stationarity of the\nMDP. When this does not hold. such as in limited data instances like ours. the SGD updates can be\n06\n\noss\n\nos\n\nos,\n\n04\n\n* 9.95\n\n03\n\na\n\nML (No RL)\n\no BPG+ML\n\nos\n\n0s\n\nos\n\nos;\n\n04\n\nos\n\npes\n\n\u00a9 BPG-CONST\n+ BPG-GTD(A)\n% OPG-CONST\nx OPG-GTD(A)\n\n06\n\noss\n\n0.25\n\n30\n\n700 760 200\nNumber of Epochs\n\n30 700 760 200\nNumber of Epochs\n\n50\n\n700 180\nNumber of Epochs\ngfe 055 \u00a9 BPG-CONS oss\nEos + BPG-GTD(A) os\n2 0s % OPG-CONST ous\n\u201c5 045 kK \u00bb\u00ab OPG-GTD(A)\n&\n2 i. 
04s\nB04 == ME (No RL)\n4 o BPG+ML) |} =\n\nos + BPG-NIS+ML of\n\n03 % BPG os 025\n\n30 700 7180 200 30 700 180 200 30 700 150 200\nNumber of Epochs Number of Epochs Number of Epochs\n\n(@ \u2014_(b)\n\nnog\nFigure 2: Results for synthetic experiments. (a): Comparison of BPG with and without maximum likelihood\n(ML) initialisation and BPG without importance sampling (BPG-NIS). The dotted line indicates performance\nof ML alone. (b): Comparison of BPG with its online counterparts OPG. We compare both methods using\na constant estimator (CONST) for the value function and GTD(A). (c): Comparison of BPG with different\nvalues of . All curves were averaged over 10 experiments where the training set was picked randomly from a\npool. The test set was the same in all 10 experiments. The error bars indicate one standard error."}, {"section_index": "6", "section_name": "4.2 RESTAURANT RECOMMENDATIONS", "section_text": "We use data from an on-line restaurant recommendation service. Customers log into the service\nand chat with a human agent asking recommendations for restaurants. The agents ask a series of\nquestions such as food preferences, group size etc. before recommending a restaurant. The goal is\nto train a chatbot (policy) which can replace or assist the agent. For reasons explained in Section 1,\nmaximum likelihood training alone will not be adequate. By obtaining reward labels for responses\nproduced by various other bots, we hope to improve on a bot initialised using maximum likelihood.\nData Collection: We collected data for RL as follows. We trained five different RNN chatbots wit\ndifferent LSTM parameters via maximum likelihood on a dataset of 6000 conversations from thi\ndataset. The bots were trained to reproduce what the human agent said (output y) given the pas\nconversation history (input x). While the dataset is relatively small, we can still expect our bots t\ndo reasonably well since we work in a restricted domain. Next, we generated responses from thes\nbots on 1216 separate conversations and had them scored by workers on Amazon Mechanical Tur\n(AMT). For each response by the bots in each conversation, the workers were shown the histor\nbefore the particular response and asked to score (label) each response on a scale of 0 \u2014 1 \u2014 2. W\ncollected scores from three different workers for each response and used the mean as the reward.\nBot-l: H = 512, E = 256. BPG: \\=0.5, GTD(A) estimator for V.\n\nBot-2: H = 400, FE = 400. BPG: \\ = 0.5, constant estimator for Vv.\ne Bot-l: H=512,E=256. Bi\ne Bot-2: H = 400,E =400. BI\nTesting: We used a separate test set of 500 conversations which had a total of more than 3500 input-\noutput (conversation history - response) pairs. For each Bot-1 and Bot-2 we generated responses\nbefore and after applying BPG, totalling 4 responses per input. We then had them scored by workers\non AMT using the same set up described above. The same worker labels the before-BPG and after-\nBPG responses from the same bot. This controls spurious noise effects and allows us to conduct a\npaired test. We collected 16, 808 before and after label pairs each for Bot-1 and Bot-2 and compare\nthem using a paired t-test and a Wilcoxon signed rank test.\nvery noisy. Secondly, the value function estimate plays a critical role in the online version. While\nobtaining a reliable estimate V is reasonable in on-line settings where we can explore indefinitely\nto collect a large number of samples, it is difficult when one only has a limited number of labelled\nsamples. 
Finally, we compare BPG with different choices for \\ in Fig. 2(c). As noted previously,\nA < 1is useful with stochastic rewards, but choosing too small a value is detrimental. The optimal\nX value may depend on the problem.\nPolicies and RL Application: Next, we initialised 2 bots via maximum likelihood and then used\nBPG to improve them using the labels collected from AMT. For the 2 bots we used the following\nLSTM hidden state size H, word embedding size EF and BPG parameters. These parameters were\nchosen arbitrarily and are different from those of the bots used in data collection described above.\n| Mean(ML) | Mean (BPG+ML) | Paired t-test | Wilcoxon\n\nBot-1 | 0.8951 + 0.0070 | 0.9052 + 0.0069 0.10296 0.07930\nBot-2 | 0.7009 + 0.0066 | 0.7317 + 0.0066 0.00007 0.00017\nResults: The results are shown in Table 1. The improvements on Bot-2 are statistically significan\nat the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test. The large p-values fo\nBot-1 are due to the noisy nature of AMT experiments and we believe that we can attain significance\nif we collect more labels which will reduce the standard error in both tests. In Appendix B.2 we\npresent some examples of conversation histories and the responses generated by the bots before anc\nafter applying BPG. We qualitatively discuss specific kinds of issues that we were able to overcome\nvia reinforcement learning."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "We presented a policy gradient method for batch reinforcement learning to train chatbots. The data\nto this algorithm are input-output sequences generated using other chatbots/humans and stochastic\nrewards for each output in the dataset. This setting arises in many applications, such as customet\nservice systems, where there is usually an abundance of unlabelled data, but labels (rewards) are\nexpensive to obtain and can be noisy. Our algorithm is able to efficiently use minimal labelled data\nto improve chatbots previously trained through maximum likelihood on unlabelled data. While out\nmethod draws its ideas from previous policy gradient work in the RL and NLP literature, there are\nsome important distinctions that contribute to its success in the settings of interest for this work.\nVia importance sampling we ensure that the probability of an action is properly accounted for in\noff-policy updates. By explicitly working in the batch setting, we are able to use knowledge of\nfuture actions and rewards to converge faster to the optimum. Further, we use the unlabelled data\nto initialise our method and also learn a reasonable behaviour policy. Our method outperforms\nbaselines on a series of synthetic and real experiments.\nThe ideas presented in this work extend beyond chatbots. They can be used in applications such\nas question answering, generating image descriptions and machine translation where an output sen-\ntence generated by a policy is scored by a human labeller to provide a weak supervision signal."}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to thank Christoph Dann for the helpful conversations and Michael Armstrong for\nhelping us with the Amazon Mechanical Turk experiments."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251\u2014276, 1998.\nTable 1: The results on the Mechanical Turk experiments using the restaurant dataset. 
The first two columns\nare the mean labels of all responses before and after applying BPG on the bots initialised via maximum like-\nlihood. The last two columns are the p-values using a paired t-test and a paired Wilcoxon signed rank test\nFor both Bot-1 and Bot-2, we obtained 16,808 before and after responses scored by the same worker. Bot-2 is\nally significant at the 10% level on both tests while Bot-1 is significant on the Wilcoxon test.\nHamid Reza Maei. Gradient temporal-difference learning algorithms. University of Alberta, 2011.\nMarc\u2019 Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training witl\nrecurrent neural networks. arXiv preprint arXiv: 1511.06732, 2015.\nRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT pres:\nCambridge, 1998."}, {"section_index": "10", "section_name": "A IMPLEMENTATION OF GTD()\\)", "section_text": "We present the details of the GTD(A) algorithm (Maei, 2011) to estimate a value function in Al-\ngorithm 2. However, while Maei (2011) give an on-line version we present the batch version\nhere where the future rewards of an episode are known. We use a parametrisation of the form\nV(s) = Ve(s) = o(\u20ac' d(s)) where \u20ac \u20ac R\u00ae is the parameter to be estimated. o(z) = 1/(1 +77) is\nthe logistic function.\nThe gradient and Hessian of Ve have the following forms,\nVeVe(s) = Ve(s)(1 \u2014 Ve(s))d(s), V2Ve(s) = Ve(s)(1 \u2014 Ve(s)) (1 \u2014 2Ve(s)) (8) 6(s) |\nThe Hessian product in step (d) of Algorithm 2 can be computed in O(d) time via,\nV2Ve(s) - w = |Ve(s)(1 \u2014 Ve(s))(1 \u2014 2Ve(s))(4(s) \"w)] o(s\neer esG ye MS NM\nFor each episode i = 1,...,n\ne Set rey ri, Gyr \u2014 0, Ghy1 \u2014 0\n\u00a9 pr \u2014 79(a0| 8) /q(al|s) fort =1,...,7.\nFor each time step in reverse t = T\u2122,..., 1:\n(a) ge pi((L - d)Ve (st) + Nord.)\n(b) ga a(a - NVEVe (st?) + dads.)\n(c) br \u2014 gh \u2014 Ve(s\\\u201d)\n(d) hn & ww eT ))V2PE(s'?) -w\n(e) Aw + Aw + tT (6 \u2014 wl VeVels s\\)) VeVe(s'?)\n(8) AE AEF phy (VEN (st) \u2014 ah VeVe(st?) \u2014 hey\nwe\u2014wta\u2019Aw.\nCe f\u00a3+ta' AF."}, {"section_index": "11", "section_name": "3 ADDENDUM TO EXPERIMENTS", "section_text": "Given an input and output sequence, we used the average of five Bernoulli rewards Bern(r), where\nthe parameter r was r = 0.75 x r, + 0.25 x rp. Here r,; was the fraction of common words in the\ninput and output sequences while r; = 0.01\u201d where p, is the fraction of forbidden words in the\ndataset. As the forbidden words, we used the 50 most common words in the dataset. So if an input\nThe algorithm requires two step sizes a\u2019, a\u2019 below for the updates to \u20ac and the ancillary parameter\nw. Following the recommendations in Borkar (1997), we use a\u201d < a. In our implementations,\nwe used a\u2019 = 10\u00b0 and a\u201d = 10~\u00b0. 
When we run BPG, we perform steps (a)-(f) of Algorithm 2\nin step (iii) of Algorithm 1 and the last two update steps of Algorithm 2 in the last update step of\nAlgorithm 1.\nV2Ve(s) \u201cw= [Vel s)(1\u2014 Ve(s s))(1 \u2014 Wels ))(O(s) Tw)] as\n\nAlgorithm 2 GTD())\n\nGiven: Data {(x;, yi, ri) }%4, step sizes a\u2019, a\u201d, return coefficient A, initial \u00a3.\n\u2014 SetE+ &h,w + 0.\n\u2014 For each epoch k = 1,2,...\n> Set AE + 0, Aw + 0.\n> For each episode i = 1,...,n\ne Set rey HT, 9P41 0, ay \u2014 0\n\u00a9 pr \u2014 79(a0| 8) /q(al|s) fort =1,...,7.\nFor each time step in reverse t = T\u2122,..., 1:\n(a) gh pr((1\u2014 d)Ve (st) + Nord.)\n(b) ge pr((1- AV_Ve(s ty) + daha)\n(\u00a9) & \u2014 gh ~ Ves\u201d)\n(d) hy & (5, \u2014 wl VeVe(s'?)) V2 0 (80?) - w\n(e) Aw + Aw + rev - w! VeVe(s iy ))VeVe(s'?)\n() AEH AE + hy (He VEE (51?) \u2014 hw\" VePe(st\u201d) \u2014 he)\nrwewta\u2019Aw.\nre Ee Eta\u2018.\nhad 10 words of which 2 were forbidden, an output sequence repeating 7 of the allowed words anc\n1 forbidden word would receive an expected score of 0.75 x (8/10) + 0.25 x 0.01@/8) = 0.7406.\nThe training and testing set for reinforcement learning were obtained as follows. We trainec\n4 bots using maximum likelihood on 6000 input output sequences as indicated in Section 4.1\nThe LSTM hidden state size H and word embedding size E for the 4 bots were, (H,E) =\n(256, 128), (128, 64), (64, 32), (32, 16). The vocabulary size was |.A| = 12000. We used these bot:\nto generate outputs for 500 different input sequences each. This collection of input and output pair:\nwas scored stochastically as described above to produce a pool of 2000 input-output-score triplets\nFrom this pool we use a fixed set of 500 triplets for testing across all our experiments. From the\nremaining 1500 data points, we randomly select 500 for training for each execution of an algorithm\nFor all RL algorithms, we used an LSTM with 16 layers and 16 dimensional word embeddings.\nWe collected the initial batch of training data for RL as follows: We trained, via maximum likelihood\non 6000 conversations, five RNN bots whose LSTM hidden size H and word embedding size EF were\n(H, E) = (512,512), (256, 256), (128, 128), (512, 256), (256,64). The inputs x were all words\nfrom the history of the conversation truncated at length 64, i.e. the most recent 64 words in the\nconversation history. The outputs were the actual responses of the agent which were truncated to\nlength 64. As the vocabulary we use the |.A| = 4000 most commonly occurring words in the dataset\nand replace the rest with an <UNK> token.\nUsing the bots trained this way we generate responses on 1216 separate conversations. This dat\nwas sent to AMT workers who were asked to label the conversations on the following scale."}, {"section_index": "12", "section_name": "SOME QUALITATIVE RESULTS", "section_text": "In Tables 2 and 3 we have presented some examples. The text in black/grey shows the conversation\nhistory, the response in blue is by the bot trained via maximum likelihood (ML) alone and in red is\nthe bot after improvement using our BPG reinforcement learning algorithm.\nThe first two examples of Table 2 present examples where the ML algorithm repeated generic ques\ntions (on budget, group size etc.) even though they had already been answered previously. 
Afte\napplying BPG, we are able to correct such issues, even though there are some grammatical errors\nIn the second, third and fourth example, we see that the ML+BPG bot is able to take context int\nconsideration well when responding. For example, the customer asks for oriental/Mexican/Italia\nfood. While the ML bot doesn\u2019t take this into consideration, the ML+BPG bot is able to provid\nrelevant answers. However, in the third example, the name of the restaurant suggests that the foo\nmight be Indian and not Mexican. In the final example of Table 2 the customer asks a direct ques\ntion about smoking. The ML bot provides an irrelevant answer where as the ML+BPG bot directl:\nresponds to the question.\ne 2: The response is coherent and appropriate given the history and advances the conversation\nforward.\n\ne 1: The response has some minor flaws but is discernible and appropriate.\n\ne 0: The response is either completely incoherent or inappropriate and fails to advance the\nconversation forward.\nIn some examples, the ML bot had a tendency to produce sentences that were grammatically correct\nbut nonsensical, sensible but grammatically incorrect, or just complete gibberish. We were able to\ncorrect such issues via RL. The first three examples of Table 3 present such cases. Occasionally the\nopposite happened. The last example of Table 3 is one such instance.\nCustomer: Hi there, I\u2019m looking for a restaurant recommendation.\nAgent: How many in your party?\nCustomer: 5, including children.\n\nML: How many people will be joining you? [0]\nML+BPG: That is a fun size, and what is your budget joining you? [1.5]\nCustomer: Hey... Sorry, it\u2019s the anniversary of my wife\u2019s death. I always do\noriental tonight.\n\nAgent: I\u2019m sorry to hear that. I will do my best to find just what you need.\nCustomer: Thank you very much.\n\nAgent: What kind of budget are you looking at?\n\nCustomer: Doesn\u2019t matter... $100 sounds like a good number.\n\nAgent: Will it be just you, or are others going to join you?\n\nCustomer: It\u2019s just me tonight, but if you are off soon I wouldn\u2019t mind a\nfriendly ear. No pressure.\n\nML: What type of budget are you looking to stay within? [0.33]\nML+BPG: How about: Caf\u00e9 Hungry <UNK> ? They serve sea American to oriental\non site such? [1.33]\n\nCustomer: I would love to find a great Mexican restaurant!\nAgent: Great, and will anybody be joining you?\nCustomer: There will be a total of 3 of us.\n\nCustomer: Could you find me a place with outdoor seating?\nAgent: How about: The Painted Burrito. They serve Mexican food.\nCustomer: Eh, I\u2019ve heard that one isn\u2019t great. Can you please suggest another?\n\nML: How about: The Hungry .w seafood restaurant? they serve sea and\nAmerican food. [0.33]\nML+BPG: I am happy to, one moment. How about: Diva Indian Bistro Boston\nbar? They serve Mexican food. [1.67]\n\nCustomer: Hello, I\u2019m looking for a restaurant.\n\nAgent: I can help! Please tell me what kind of foods you most enjoy. I see\nyou are in the Cambridge area.\n\nCustomer: I would like a restaurant that serves Italian food.\n\nAgent: Are you looking for something a little less expensive or are you\nlooking for a high end restaurant?\n\nCustomer: I am within a budget, 20 to 50 dollars.\n\nAgent: I highly recommend The Redhouse Restaurant. It serves\n\nCustomer: Is there another restaurant? the restaurant would also need to host a\nparty size of 5 people.\n\nML: How about: Amuse? They serve French and American food. 
[0]\nML+BPG: How about: Ocean prime - Boston Seaport? They serve sea, American\nand Italian food. [2]\nAgent: How about Cafe at Taj Boston?\n\nCustomer: Oh, I just ate there last night. Anything else?\n\nAgent: Try Mela Restaurant at 578 Tremont St. Anything else I can help you\n\nwith today?\nCustomer: That sounds nice. Do they allow smoking?\n\nML: This sure is within your budget for all. [0.5]\nML+BPG: No. [1.5]\n\nT1112. 9.\nTable 2: Some qualitative results on the AMT experiment using restaurant data. In each cell, the text ir\nblack/grey show the history of a conversation between the customer and a human agent. The dots ... mean\nthat the part of the conversation history is hidden. The blue response is by the bot trained using maximum\nlikelihood (ML) alone. The red response is by the bot improved with BPG after ML. The numbers on the right\nof the blue/red responses are the average scores each response received from 2-3 AMT workers.\nTable 3: Qualitative results on the AMT experiment using restaurant data, continued. See caption unde\nTable 2 for more detail:\nAgent:\n\nHow may I help you?\n\nCustomer: I\u2019d like to find a French restaurant.\n\nAgent: Do you have a budget in mind?\n\nCustomer: Between $50 and $75.\n\nAgent: How many will be in your party?\n\nCustomer: Just me.\n\nML: And how will you be dining? [0]\nML+BPG: How will you be traveling to the restaurant? [2]\nAgent: Hello, how may I help you?\n\nCustomer: We are kinda hungry, any suggestion in mind?\n\nAgent: What type of cuisine do you like?\n\nCustomer: I want them Americano food.\n\nML: Very evening. Will you be dining alone? [1]\nML+BPG: Excellent choice. How many people will be in your party? [2]\nCustomer: I\u2019d like to find a good local restaurant.\n\nAgent: What type of cuisine would you like?\n\nCustomer: I have a hankering for Indian curry.\n\nAgent: Will you be needing parking?\n\nCustomer: We\u20191ll be driving altogether, so yes.\n\nML: Thank you for 1. How about: T. w your curry this place should be\ndining? [0]\nML+BPG: How about: The Maharaja? They serve Indian and American food. [2]\nAgent: How can I help you today?\n\nCustomer: I would like a restaurant recommendation.\n\nAgent: Okay what type of resturant?\n\nCustomer: Chinese would be great.\n\nAgent: Who will be dining, how will you be travelling?\n\nCustomer: Just me and by bus.\n\nML: Okay, thanks. Do you have a budget in mind? [2]\nML+BPG: Do you want a wonderful place evening, or whats your party adults,\n\nand a place.\n\n{0}"}]
rywUcQogx
[{"section_index": "0", "section_name": "DIFFERENTIABLE CANONICAL CORRELATION\nANALYSIS", "section_text": "Matthias Dorfer\nDepartment of Computational Perception\nJohannes Kepler University Linz\nLinz, 4040, Austria"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Canonical Correlation Analysis (DCCA) (Andrew et al., 2013) is a non-linear extension o\nclassic Canonical Correlation Analysis (CCA) (Hotelling, 1936) that learns highly correlated laten\nrepresentations on top of two different neural networks. The central idea of our work is to exten\nthis formulation and cast CCA as a fully differentiable neural network layer which allows for param\neter optimization via back-propagation through the CCA projection matrices. This is in contrast t\nDCCA, where correlation analysis is the topmost part of the network and only used as an optimiza\ntion target for maximizing the correlation between the respective views. DCCA in general gaine\na lot of attention recently. It inspired related methods such as Deep Linear Discriminant Analysi\n(Dorfer et al., 2015) as well as a discriminative re-formulation of DCCA (Elmadany et al., 2016\napplied to improve speech-based emotion recognition. Wang et al. (2015a) show that joint optimiza\ntion of correlation and reconstruction error in auto-encoder configurations is successfully used fo\nrepresentation learning on a multi-modal speech production dataset. We take this as a motivation t\nevolve and extend the applicability of DCCA.\nIn our experiments, we employ the proposed differentiable CCA layer in a cross-modality retrieva\nsetup. Cross-modality retrieval is the task of retrieving relevant data of another type when a sampl\nof a different modality is given as a search query. A recent survey by Wang et al. (2016) categorize:\nthe task into binary and real-valued representation learning. In the case of real-valued representatiot\nlearning, End-to-End DCCA (Yan & Mikolajcezyk, 2015) achieves state of the art retrieval result\nin combination with retrieval by cosine distance computation. With differentiable CCA, it become:\npossible to train the networks to directly minimize the objective which will be used for retrieva\n(e.g., the cosine distance), while still benefitting from the optimally-correlated projections obtainec\nby CCA. Results on two publicly available datasets (Flickr30k (Young et al., 2014), IAPR TC-1:\nJan Schliiter\ngan schiuter\n\nThe Austrian Research Institut\nfor Artificial Intelligence\nVienna, 1010, Austria\n\nSan enhliictaranftai at"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Canonical Correlation Analysis (CCA) computes maximally-correlated linear pro-\njections of two modalities. We propose Differentiable CCA, a formulation of CCA\nthat can be cast as a layer within a multi-view neural network. Unlike Deep CCA,\nan earlier extension of CCA to nonlinear projections, our formulation enables\ngradient flow through the computation of the CCA projection matrices, and free\nchoice of the final optimization target. 
We show the effectiveness of this approach\nin cross-modality retrieval experiments on two public image-to-text datasets, sur-\npassing both Deep CCA and a multi-view network with freely-learned projections.\nWe assume that Differentiable CCA could be a useful building block for many\nmulti-modality tasks.\nIn this section, we review the concepts of classical and deep Canonical Correlation Analysis, the\nbasis for the methodology proposed in this work.\ncorr(A*\u2019x, B*\u2019y) = > d;\n\ni<k\nHere, r is a regularization parameter ensuring the matrices are positive definite. Substituting these\nestimates for U,.., Ug. and X,,,, respectively, we can estimate A* and B* using Equation 3."}, {"section_index": "3", "section_name": "2.2 DEEP CANONICAL CORRELATION ANALYSIS (DCCA)", "section_text": "Andrew et al. (2013) propose an extension of CCA that allows learning parametric nonlinear trans-\nformations of two variables maximizing the cross-correlation after optimal projection. Specifically,\nlet a \u20ac R% and b \u20ac R\u00ae denote two random vectors, and let x = f(a;O,) and y = g(b;0,)\ndenote their nonlinear transformations, parameterized by Oy and O,. For example, f and g could\nbe feed-forward neural networks. As before, Equation 3 gives the linear transformations of x and y\noptimizing the CCA objective in Equation 2. Deep CCA optimizes \u00a9, and \u00a9, to further increase\nthe cross-correlation. For d, = dy = k, the CCA objective is equal to the sum of all singular values\nof T (Equation 4), which is equal to its trace norm:\nThe remainder of our paper is structured as follows. In Section 2, we review classic and deep\nCCA, which are the basis for the differentiable CCA layer proposed in Section 3. In Section 4, we\nshow results of an experimental evaluation in a cross-modiality retrieval setting and provide further\ninvestigations on the representations learned by our networks. Finally, Section 5 concludes the\npaper.\nLet x \u20ac R\u00ae and ye Ry denote two random vectors with covariances \u00a9... and Xyy and cross-\ncovariance S,y. The objective of CCA is to find two matrices A* \u20ac R\u201c=** and B* \u20ac R4y** (with\nk <d, and k < d,,) that project x and y into a common space maximizing their cross-correlation:\n(A*, B*) = arg max corr(A\u2019x, B\u2019y)\nA.B\n(A*, B*) = arg max A'S 2yB\nA/S, A=B/DyyB=I\nA= PUy, Bay?\nIn practice, the covariances and cross-covariance of x and y are usually not known, but estimated\nfrom a training set of m paired vectors, expressed as matrices X \u20ac R\u00a2#*\u2122,Y \u20ac R\u00e9v*\u2122:\nil\n\nm\n\nX1\n\n|\n\n=Y-\u2014\n\nm\ncorr(f(a; 9), g(b; O,)) = corr(x, y) = ||T] |e = tr(T\u2019T)'/*\nA 4\nflay UO OC OO OOOOS gh)vt\n\nTrace Norm Objective\n\nx=f(a)(O OO OO} OOOO Clyzgtb) xzfla)[SOOOO OO 06 Olysgtb)\n(\u00a9960666] \u00a9000066 00000\nBoCsees) Sootecs @000000\n(\u00a9060006) _\u2014_ eeuseus\n\n: a\n\nfa\\ NOCCA Lh\\ COA T aver\neer \u2014\u2014o\n\nr\n\nr\nflay U OOO OO @@e\n\nOO} gtb) Vv\"\n\nTrace Norm Objective\n\nr\nxzfla[0 0 0 00] DOO Oly=ah) xefla)\n\n(\u00a9900000) (Coe0eee O06\n\n(\u00a9GG5000) SOCCGCSE feXs)\n\nLe\n\n(\u00a98600080) \u00a9OG0G66 @\n\n4 -\n(a) DCCA (b) CCA Layer\nFigure 1: Comparison of DCCA and the prosed differentiable CCA layer. DCCA optimizes th\u00e9\ncorrelation of the two different views and is therefore the topmost part of the network. In contrast\nour CCA layer establishes gradient flow over the CCA computation. 
This allows us to use th\nprojection output of CCA as input for subsequent components in a multi-view network (e.g., \u2018\nretrieval objective such as cosine distance).\nAndrew et al. (2013) show how to compute the gradient of this Trace Norm Objective (TNO) with\nrespect to x and y. Assuming f and g are differentiable with respect to Oy and \u00a9, (as is the\ncase for neural networks), this allows to optimize the nonlinear transformations via a gradient-based\nmethod. Figure 1a shows a schematic sketch of DCCA, as a fixed objective backpropagated through\ntwo neural networks.\nIn this section, we further extend DCCA to allow not only an arbitrary nonlinear transformation o!\nthe inputs, but also arbitrary transformations of (or objectives on) the projected vectors. This allows\nCCA to be used as a building block within a multi-modality neural network, instead of as a fina\nobjective only. In the following, we will discuss how to enable backpropagation through CCA, whai\nto consider when doing stochastic updates, and how to apply it for cross-modality retrieval."}, {"section_index": "4", "section_name": "3.1 GRADIENT OF CCA", "section_text": "For our differentiable CCA, we instead need the gradients of the projected data A*\u2019x and B*\u2019y wrt.\nx and y, which require a and aS: We could again decompose this into the gradients wrt. T,\nthe gradients of T wrt. U2, LU, and \u201cyy, and the gradients of those wrt. x and y. However, while\nthe gradients of U and V wrt. T are known (Papadopoulo & Lourakis, 2000), they involve solving\nO((dzdy)?) linear 2 x 2 systems. To arrive at a more practical implementation that does not require\nthe gradient of the SVD, we reformulate the solution to use two symmetric eigendecompositions\nTT\u2019 = Udiag(e)U\u2019 and T\u2019T = Vdiag(e)V\u2019 (Petersen & Pedersen, 2012, Eq. 270). This gives\nus the same left and right eigenvectors we would obtain from the SVD (save for possibly flipped\nsigns, which are easy to fix), along with the squared singular values (e; = d?). The gradients of\neigenvectors of svmmetric real eigensvstems have a simple form (Magnus. 1985. Ea. 7) and both\nAs mentioned above, we can compute the canonical correlation along with the optimal projection\n\nmatrices from the singular value decomposition T = ce Sy Sy ye = Udiag(d)V\u2019. Specifi-\ncally, we obtain the correlation as )>; d;, and projections as A* = oe ?U and B* = Dy Vv. For\n\nDCCA, it suffices to compute the gradient of the total correlation wrt. x and y in order to backprop-\nagate it through the two networks f and g. Using the chain rule, Andrew et al. (2013) decompose\nthis into the gradients of the total correlation wrt. Uy, L., and Xy,, and the gradients of those wrt.\nx and y. Their derivations of the former make use of the fact that both the gradient of )>, d; wrt. T\nand the gradient of ||'T||,; (the trace norm objective in Equation 7) wrt. T\u2019T have a simple form; see\nAndrew et al. (2013, Sec. 7) for details.\nTT\u2019 and T\u2019T are differentiable wrt. x and y, enabling a sufficiently efficient implementation in :\ngraph-based, auto-differentiating math compiler such as Theano (Theano Development Team, 2016)"}, {"section_index": "5", "section_name": "3.2 STOCHASTIC OPTIMIZATION", "section_text": "For classical CCA, Uae, Ney and yy are estimated from a large set of m training examples (Equa-\ntion 6). In contrast, gradient-based optimization of neural networks usually estimates the gradients\nwrt. 
network parameters from mini-batches of n randomly drawn examples, with n < m. In Deep\nCCA as well as in our extension, the correlations are functions of the network parameters that we\nneed to backpropagate through, effectively enforcing m = n.\nAndrew et al. (2013) solve this discrepancy by optimizing the network parameters with L-BFGS o\nthe full training set, which is infeasible for very large datasets. Yan & Mikolajczyk (2015) instea\ntrain on small mini-batches, estimating correlation matrices of size 4096 x 4096 from 100 example\nonly, which seems risky. We will choose a way in between, training on large mini-batches to obtai\nstable estimates. This approach was also taken by Wang et al. (2015b, Sec. 5.1), who found mini\nbatches of 400-1000 examples to even outperform full-batch L-BFGS. In addition, for testing, w\noptionally re-estimate the correlation matrices (and the corresponding projection matrices) using\nlarger set of m > n examples.\nAnother tempting option is to train on small mini-batches, but use exponential moving average:\nupdated with each mini-batch as follows:\nYee + Ueeg(L-a)+ Yep gy Ley(l-\u2014a)+Yyya Lyy + Vyy(1\u2014 a) + Yyy"}, {"section_index": "6", "section_name": "3.3. CROSS-MODALITY RETRIEVAL WITH DIFFERENTIABLE CCA", "section_text": "DCCA maximizes the correlation between the latent representations of two different neural net-\nworks. When the two network inputs a and b represent different views of an entity (e.g., an image\nand its textual description), DCCA projects them into a common space where they are highly cor-\nrelated. This can be exploited for cross-modality retrieval: Projecting one modality of an entity, we\ncan find the best-matching representations of the second modality (e.g., an image for a textual de-\nscription, or vice versa). To find the best matches, a common option is to compute nearest neighbors\nin terms of cosine distance (Yan & Mikolajczyk, 2015), which is closely related to correlation.\nGiven the methodology introduced above, we now have the means to optimize DCCA projection:\ndirectly for the task at hand. In Figure 1b, we show a possible setting where we put the differentiabl\nCCA layer on top of a multi-view network. Instead of optimizing the networks to maximize th\u00e9\ncorrelation of the projected views (the TNO), we can optimize the networks towards a task-specific\nobjective and still benefit from the optimality of the CCA projections.\nFor this work, we optimize towards minimal cosine distance between the correlated views, the very\nmetric used for retrieval. In the next section, we empirically show that this is indeed beneficial in\nterms of quantitative retrieval performance as well as convergence speed of network training."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate our approach in cross-modality retrieval experiments on two publicly available dataset:\n(also considered by Yan & Mikolajczyk (2015)) and provide investigations on the representation:\nlearned by the network."}, {"section_index": "8", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "For the evaluation of our approach, we consider Flickr30k and IAPR TC-12, two publicly available\ndatasets for cross-modality retrieval. Flickr30k consists of image-caption pairs, where each image\nWith proper initialization and a sufficiently small coefficient a, this gives stable estimates even for\nsmall n. 
However, since only the estimates from the current mini-batch Saws Sry and Syy can be\npractically considered in backpropagation, this changes the learning dynamics: For too small a,\nthe projection matrices will be virtually degraded to constants. Empirically, we found that large\nmini-batches perform slightly better than small batches with moving averages (see Appendix B).\nTable 1: Example images for Flickr30k (top) and IAPR TC-12 (bottom)\ns annotated with five different textual descriptions. The train-validation-test split for Flickr30I\ns 28000-1000-1000. In terms of evaluation setup, we follow the related work and report result\nm two different evaluation protocols. Protocol pooled pools the five available captions into on\nconcatenated\" text, meaning that only one but richer text annotation remains per image. This i\nlone for all three sets. Protocol 5 captions pools only the captions of the train set and keeps fiv\neparate annotations for validation and test set. The IAPR TC-12 dataset contains 20000 natura\nmages where only one \u2014 but compared to Flickr30k more detailed \u2014 caption is available for eacl\nmage. As no predefined train-validation-test split is provided, we randomly select 2000 images fo\nesting, 1000 for validation and keep the rest for training. Yan & Mikolajczyk (2015) also use 200\nmages for testing, but did not explicitly mention hold out images for validation. Table | shows a1\n\u2018xample image along with its corresponding captions or caption for either dataset.\nThe task at hand for both datasets is to retrieve the correct counterpart \u2014 either text or image \u2014\nwhen given a query element of the other modality. We follow Yan & Mikolajczyk (2015) and use\nthe cosine distance for retrieval in the projection space. As evaluation measures we consider the\nRecall@k (R@k) as well as the Median Rank (MR) and the Mean Average Precision (MAP). The\nR@k rate (high is better) is the ratio of queries which have the correct corresponding counterpart in\nthe first & retrieval results. The MR is the median position (low is better) of the target in a similarity-\nordered list of available candidates. Finally, we define the MAP (high is better) as the mean value of\n1/Rank over all queries.\nThe input to our networks is a 4096-dimensional image feature vector along with a correspond-\ning text vector representation (5793 for Flickr30k, 2048 for IAPR TC-12). In terms of text pre-\nprocessing, we follow Yan & Mikolajczyk (2015), tokenizing and lemmatizing the raw captions\nas the first step. Based on the lemmatized captions, we compute /2-normalized TF/IDF-vectors,\nomitting words with an overall occurrence smaller than 5 times for Flickr30k and 3 times for IAPR\nTC-12, respectively. The image represenations are computed from the last hidden layer of a network\npretrained on ImageNet (layer fc7 of CNN S by Chatfield et al. (2014))."}, {"section_index": "9", "section_name": "4.2 NETWORK ARCHITECTURES AND OPTIMIZATION DETAILS", "section_text": "We feed 4096-dimensional image vectors along with the corresponding text representation into ou\nnetworks. The image representation is followed by a linear dense layer with 128 units (this will als\nbe the dimensionality k = 128 of the resulting CCA retrieval space). The text vector is processe\nby two batch-normalized (Ioffe & Szegedy, 2015) dense layers of 1024 units each and an ELI\nactivation function (Clevert et al., 2015). 
As a last layer for the text representation network, w\nagain apply a dense layer with 128 linear units. For a fair comparison, we keep the structure (an\nnumber of parameters) of all networks in our experiments the same. The only parameters that var\nare the objectives and the corresponding optimization/regularization strategies. In particular, w\napply a grid search on the respective hyper-parameters and report the best results for each method\nOptimization is performed either using Stochastic Gradient Descent (SGD) with momentum or b\u2019\nthe adam (Kingma & Ba, 2014) update rule.\nA man in a white cowboy hat reclines in front of a window in an airport.\n\\ young man rests on an airport seat with a cowboy hat over his face.\nA Man Is sleeping inside on a bench with his hat over Nis eyes.\nsleeping at an airport with a hat on their head.\na green and brown embankment with brown houses on the right and a light\norown sandy beach at the dark blue sea on the left; a dark mountain range\nyehind it and white clouds in a light blue sky in the background;\nTable 2: Cross-modality retrieval results on Flickr30k. \u201cE2E-DCCA\u201d is taken from Yan & Mikola-\njezyk (2015), all other results are our own. Methods marked with \"*\" re-estimate projection matrices\nfrom a larger batch than used during training (10,000 training examples), see Section 3.2.\nImage-to-Text\n\nText-to-Image\n\nProtocol Method R@1 R@5 R@10 MR|R@1 R@5 R@10 MR\nE2E-DCCA 279 56.9 68.2 4 26.8 52.9 66.9 4\nTNO* 29.9 57.9 67.9 4 21.8 48.1 64.0 6\nlearned-cos? 90 23.3 32.8 28 8.5 23.3 32.8 26\npooled CCAL-I2 18.2 42.0 53.6 9 17.7) 42.2 53.2 9\nCCAL-cos 28.9 57.5 69.1 4 25.1 53.1 66.4 5\nCCAL-cos? 30.7 58.8 70.1 4 28.0 56.2 68.3 4\nCCAL-cos?* 34.1 60.0 ~\u2014- 70.6 3.5 | 29.2 58.3 69.7 4\nE2E-DCCA 16.7 39.3 52.9 8 126 310 43.0 15\n5 captions TNO* 17.5 39.3 51.4 10 ) 134 31.7 41.3 19\nCCAL-cos? 21.2 444 55.8 8 149 35.9 47.5 12\nCCAL-cos?* 20.6 45.9 57.2 7 15.6 37.0 49.4 11\nAs optimization targets, we consider the following candidates: (1) The Trace Norm Objective (TNO)\nas our base line for cross-modality retrieval (Yan & Mikolajcezyk, 2015). (2) The proposed differ-\nentiable CCA layer in combination with the objectives cosine distance (CCAL-cos), squared cosine\ndistance (CCAL-cos?) and euclidean distance (CCAL-/2). As an additional setting, we consider a\nfreely-learnable projection layer where the projection matrices A and B are randomly initialized\nweights that can be optimized by the network using SGD in the conventional way. This allows to\nassess the benefit of using CCA-derived projections within a multi-view network under otherwise\nunchanged objectives. For this experiment, we optimize for the squared cosine distance and denote\nthe setting by learned-cos?. The batch size is set to 1000 samples to allow stable covariance esti-\nmates for the CCA (Section 3.2). For further stabilization, we regularize the covariance matrices\n(Andrew et al., 2013) by adding scaled (r = 10-3) identity matrices to the estimates D,~, Sy, and\nT (Section 2.1). The variants based on differentiable CCA are additionally regularized by L2 weight\ndecay. No dropout is used in this settings as it harmed optimization in our experiments. When opti-\nmizing with the TNO we follow Yan & Mikolajczyk (2015) and use dropout (p = 0.5) after the first\ntwo dense layers of the text network. 
In Table 4 in Appendix A we provide the optimization settings\nfor all configurations in detail, found using a grid search optimizing MAP on the validation set."}, {"section_index": "10", "section_name": "4.3 EXPERIMENTAL RESULTS ON CROSS-MODALITY RETRIEVAL", "section_text": "Table 2 lists our results on Flickr30k. Along with our experiments, we also show the results re-\nported in (Yan & Mikolajczyk, 2015) as a reference (E2ZE-DCCA). However, a direct comparison\nto our results may not be fair: E2E-DCCA uses a different ImageNet-pretrained network for the\nimage representation, and finetunes this network while we keep it fixed (as we are only interested\nin comparing differentiable CCA to alternatives, not in obtaining the best possible results). Our\nTNO results use the same objective as E2E-DCCA, but our network architecture, permitting direct\ncomparison.\nWhen comparing the performance of our networks, we observe a gain both for image-to-text an\next-to-image retrieval when training with the CCAL-cos? objective compared to TNO (e.g., R@\nof 34.1 compared to 29.9 under protocol pooled). This indicates that training a network directl.\non the objective used for retrieval (using differentiable CCA) is a reasonable design choice. /\ncloser look at the results also reveals that the squared cosine distance is superior compared to th\n\u2018emaining objectives. We further observe that the randomly initialized projection matrices learne:\nsntirely by SGD (learned-cos\u201d) show poor performance compared to their CCA counterpart (eve!\nhough in theory, they could converge to exactly the same solution). This suggests that exploiting th\noeneficial properties of the CCA projections directly within a network during training is a powerfu\n\u2018ool, supporting optimization of related objectives. CCAL-/2 for example performs poorer than th\nvariants including cosine losses but still better than the version with learned weights. On protoco\nTable 3: Cross-modality retrieval results on IAPR TC-12\nImage-to-Text\n\nText-to-Image\n\nMethod R@1l R@5 MAP MR|R@I R@5 MAP MR\nE2E-DCCA 30.2 57.0 0.426 29.55 60.0 0.415\nTNO* 30.0 56.7 0.424 4 28.0 554 0410 5\nCCAL-cos?* 31.1 584 0439 4 26.8 55.1 0.403 4\n\n* \u2014\u2014 TNO\n\nMAP (1/ Rank)\n\n7 ~ aed\n3) now)\noo U/ Vaal \u2014\u2014_1NO (va)\n= cont(t NO, |\u201c \u2014\u2014 CcAL-cos? (t)\n= conical 035 on \u2014\u2014 CCAL-cos? (va)\n03000\n\no 1 20 % 4 8 6 70\nEpoch\n\n(a) Evolution of correlation (train)\nand cosine distance (validation)\n\n(\n\no 10 2 2% 4 80 60 70\n\nEpoch\n\n) MAP over training epochs\n\n0 1 2 % 4 50 6\nCorrelation Coefficient\n\n(c) Individual Correlations\nFigure 2: Comparison of the TNO and CCAL-cos? based on the total amount of canonical correla-\ntion (sum over singular values d) as well as the cosine distance between corresponding samples.\n5 captions, we only report the best results (CCAL-cos\u201d) along with the TNO and observe similar\ntendencies. Note that there are various other methods reporting results on Flickr30k (Karpathy et al.,\n2014; Socher et al., 2014; Mao et al., 2014; Kiros et al., 2014) which partly surpass ours, for example\nby using more elaborate processing of the textual descriptions. We omit these results as we focus on\nthe comparison of DCCA with the proposed differentiable CCA layer.\nIn Table 3, we list our results on the IAPR TC-12 dataset. 
We again show the retrieval performances\nof Yan & Mikolajczyk (2015) as a baseline (again with limited comparability, due to a different ar-\nchitecture and a different train-validation-test split), along with our implementation of the TNO and\nthe CCA layer trained with squared cosine distance. For image-to-text retrieval, we achieve slightly\nbetter retrieval performances when training with cosine distance and propagating the gradients back\nthrough the differentiable CCA layer. For the other direction, results are slightly worse."}, {"section_index": "11", "section_name": "4.4 INVESTIGATIONS ON LEARNED REPRESENTATIONS", "section_text": "In this section, we provide a more detailed look at the learned representations. We compare the\nrepresentations learned with the TNO to the proposed CCA layer optimized with the squared cosin\u00ab\ndistance objective. For easier comparison, we re-train both networks with a reduced projectior\ndimensionality of h = 64 \u2014 otherwise, the TNO takes much longer to converge than the CCA layer\nThis results in slightly decreased performance for both. but the relative tendences are preserved.\nFigure 2a shows the evolution of the mean correlation (mean over singular values with maximum\n1.0) on the training set during optimization. Allong with the correlation, we also plot the average\ncosine distance between corresponding pairs on the validation set. As expected, for the TNO we\nobserve a continous decrease of cosine distance when the correlation increases. Interestingly, this\nis not the case for CCAL-cos?. The result suggests that the network found a way of minimizing\nthe cosine distance other than by increasing correlation between the representations \u2014 the latter ever\ndecreases after a few training epochs. In Figure 2b, we plot the corresponding evolution of MAF\non the training and validation set, confirming that the decreased cosine distance indeed also lead:\nto improved retrieval performance. Finally, in Figure 2c we compare the individual correlatior\ncoefficients (magnitudes of CCA singular values on the training set) of both representations after the\nlast training epoch. This details the observation in Figure 2a: not only the total correlation, but alsc\nthe individual correlation coefficients are considerably higher when training with TNO, even thougt\nthe retrieval performance is lower."}, {"section_index": "12", "section_name": "5 CONCLUSION", "section_text": "We presented a fully differentiable version of Canonical Correlation Analysis which enables us\nto back-propagate errors directly through the computation of CCA. As this requires to establish\ngradient flow through CCA, we formulate it to allow easy computation of the partial derivatives\n\u00a7& and ee of CCA\u2019s projection matrices A* and B* with respect to the input data x and y.\nWith this formulation, we can incorporate CCA as a building block within multi-modality neural\nnetworks that produces maximally-correlated projections of its inputs. In our experiments, we use\nthis building block within a cross-modality retrieval setting, optimizing a network to minimize the\ncosine distance of the correlated CCA projections. Experimental results show that when using the\ncosine distance for retrieval (as is common for correlated views), this is superior to optimizing a\nnetwork for maximally-correlated projections (as done in Deep CCA), or not using CCA at all. 
We\nfurther observed (Section 4.4) that it is not necessarily required to have maximum correlation to\nachieve a high retrieval performance. Finally, our differentiable CCA layer could provide a useful\nbasis for further research, e.g., as an intermediate processing step for learning binary cross-modality\nretrieval representations."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "The research reported in this paper has been supported by the Austrian Federal Ministry for Trans-\nport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the\nProvince of Upper Austria in the frame of the COMET center SCCH, as well as by the Federal\nMinistry for Transport, Innovation & Technology (BMVIT) and the Austrian Science Fund (FWF):\nTRP 307-N23. The Tesla K40 used for this research was donated by the NVIDIA Corporation."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Matthias Dorfer, Rainer Kelz, and Gerhard Widmer. Deep linear discriminant analysis. International\nConference on Learning Representations (ICLR) (arXiv:1511.04707), 2015.\nHarold Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):32 1-377, 1936.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin\narXiv: 1412.6980, 2014.\nyalen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis.\nIn Denposdines nf the Internatinnal Canforenre nan Marhine learning nn 19171965 9012\nRyan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embedding:\nwith multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.\nJan R. Magnus. On differentiating eigenvalues and eigenvectors. Econometric Theory, 1(2):179\n191, 1985. ISSN 02664666, 14694360.\nK.V. Mardia, J.T. Kent, and J.M. Bibby. Multivariate analysis. Probability and mathematical statis-\ntics. Academic Press, 1979. ISBN 9780124712508.\nTh\u00e9odore Papadopoulo and Manolis I.A. Lourakis. Estimating the Jacobian of the Singular Value\nDecomposition: Theory and Applications. In Proceedings of the 6th European Conference on\nComputer Vision (ECCV), 2000.\nK. B. Petersen and M. S. Pedersen. The matrix cookbook, nov 2012. Version 20121115\nTheano Development Team. Theano: A Python framework for fast computation of mathematical\nexpressions. arXiv e-prints, abs/1605.02688, May 2016.\nPeter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual\ndenotations: New similarity metrics for semantic inference over event descriptions. Transactions\nof the Association for Computational Linguistics, 2:67\u201478, 2014."}, {"section_index": "15", "section_name": "APPENDIX A: OPTIMIZATION SETTINGS", "section_text": "The table below provides a detailed listing of the optimization strategies for all our experiments. All\nour configurations are of course also available in our experimental code published at (will be added).\nTable 4: Details on optimization strategies for the respective networks\nJunhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L Yuille. Explain images with multimodal\nrecurrent neural networks. arXiv preprint arXiv: 1410.1090, 2014.\nRichard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng.\nGrounded compositional semantics for finding and describing images with sentences. Trans-\nactions of the Association for Computational Lineuistics, 2:207\u2014218. 2014.\nKaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, and Liang Wang. 
A comprehensive survey on cross-\nmodal retrieval. arXiv preprint arXiv: 1607.06215. 2016.\nDUCKIOUK\n\nObjective Optimizer Units rin; Ir-schedule Dropout D2 r\nTNO momentum 2048 0.05 constant 0.5 none 10\u00b0\nCCAL momentum 1024 0.5 x0.7 from epoch 10 none 0.002 1073\nlearned-cos? momentum 1024 0.25 none none 0.002 10-8\n\nIAPR TC-12\n\nObjective Optimizer Units Irini lr-schedule Dropout D2 r\nTNO adam 1024 0.001 x0.1 in epoch 30 none 0.0001 107%\nCCAL adam 1024 0.001 x0.1 in epoch 50 none 0.0002 1073\nFigure 3: Influence of parameter a."}, {"section_index": "16", "section_name": "APPENDIX B: INFLUENCE OF RUNNING AVERAGE STATISTICS", "section_text": "In this additional section, we investigate the influence of weighting coefficient a when using ex-\nponential moving average estimates of the covariance matrices for CCA computation (see Section\n3). A high a (close to 1.0) means that the averaged estimate of U,., Uy, and, mostly depends\non the current batch, and a low a (close to 0.0) means it more strongly depends on the history of\nprevious batches. To assess whether and under what circumstances exponential moving averages are\nhelpful, we run an additional experiment on the IAPR TC-12 dataset as follows: We re-train one of\nthe models of Section 4 both with batch size 1000 and with batch size 200, varying a from 1.0 to 0.1\nwith a step size of 0.1 and measuring the MAP achieved on the validation set. We run each setting\nthree times and report the average over the three runs. Figure 3 shows the results of this experiment.\nFor batch size 1000, we draw the same conclusion as was reported in (Wang et al., 2015a;b): If the\nbatch size is sufficiently large and representative for the entire population, learning on distribution\nparameters (in this case covariance matrices) is feasible, and the network performs best when trained\nwith an a close to one. This is not the case for batch size 200. In particular, the configurations with\na large a (small effective running average window) perform poorly. We conclude that a batch size\nof 200 is too small to obtain stable and representative covariances. However, when choosing a small\na, it is still possible to train the models and achieve reasonable retrieval performance. As a prac-\ntical recommendation, we suggest to use large batch sizes whenever possible (e.g., if feasible with\navailable hardware). If the batch size needs to be reduced (e.g., for very large models and limited\nmemory), using small alpha values still allows for training canonically correlated retrieval networks.\nFor this work, we use a batch size of 1000 and fix a = 1, disabling moving averages.\nMAP (1 / Rank)\nos 89 98 9\nBP oN \u00ae OB\n\n2\n\u00b0\n\n\u2014*\u2014 hatch size 200\n\u2014\u2014 batch size 1000\n\n\u00b0\n\u00b0\n\n0.2 0.4 0.6 0.8 10\nalpha"}]
HkuVu3ige
[{"section_index": "0", "section_name": "ON ORTHOGONALITY AND LEARNING RECURRENT\nNETWORKS WITH LONG TERM DEPENDENCIES", "section_text": "Eugene Vorontsov !7, Chiheb Trabelsi !?, Samuel Kadoury !\u201d, Chris Pal\nIt is well known that it is challenging to train deep neural networks and recur-\nrent neural networks for tasks that exhibit long term dependencies. The vanishing\nor exploding gradient problem is a well known issue associated with these chal-\nlenges. One approach to addressing vanishing and exploding gradients is to use\neither soft or hard constraints on weight matrices so as to encourage or enforce or-\nthogonality. Orthogonal matrices preserve gradient norm during backpropagation\nand can therefore be a desirable property; however, we find that hard constraints\non orthogonality can negatively affect the speed of convergence and model per-\nformance. This paper explores the issues of optimization convergence, speed and\ngradient stability using a variety of different methods for encouraging or enforcing\northogonality. In particular we propose a weight matrix factorization and parame-\nterization strategy through which we can bound matrix norms and therein control\nthe degree of expansivity induced during backpropagation."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The depth of deep neural networks confers representational power, but also makes model optimiza-\ntion more challenging. Training deep networks with gradient descent based methods is known to be\ndifficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmid-\nhuber| {1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al.|\nor introducing an Ly or L; weight norm penalty. The latter has the effect of bounding the\nspectral radius of the linear transformations, thus limiting the maximal gain across the transforma-\ntion. |Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly\nby penalizing differences in successive norm pairs in the forward pass and (2013)\n\npropose to penalize successive gradient norm pairs in the backward pass. These regularizers affect\nthe network parameterization with respect to the data instead of penalizing weights directly.\nBoth expansivity and contractivity of linear transformations can also be limited by more tightly\n\nbounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are\n\nlimited to unitary gain causing the transformations to be norm-preserving. (2015) and\nhi\n\nneural network (RNN) models with transformations that are unitary by construction which they\nachieved by composing multiple basic unitary transformations. The resulting transformations, for\nsome n-dimensional input, cover only some subset of possible n x n unitary matrices but appear\nto perform well on simple tasks and have the benefit of having low complexity in memory and\ncomputation.\nThe entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a\nmuch higher computational cost, gradient descent optimization directly along this manifold can be\ndone via geodesic steps (Nishimori| (2011). Recent work\nproposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient\ndescent. 
To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of a canonical inner product and Cayley transformations. Their experimental work indicates that full capacity unitary RNN models can solve the copy memory problem whereas both LSTM networks and restricted capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000).

In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it."}, {"section_index": "2", "section_name": "2 VANISHING AND EXPLODING GRADIENTS", "section_text": "The issue of vanishing and exploding gradients, as it pertains to the parameterization of neural networks, can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with n hidden layers has pre-activations

a_i(h_{i-1}) = W_i h_{i-1} + b_i,   i ∈ {2, ..., n}.   (1)

For notational convenience, we combine parameters W_i and b_i to form an affine matrix θ. We can see that for some loss function L at layer n, the derivative with respect to parameters θ_i is:

∂L/∂θ_i = (∂a_{i+1}/∂θ_i) (∂L/∂a_{i+1}).   (2)

The partial derivatives for the pre-activations can be decomposed as follows:

∂a_{i+1}/∂θ_i = (∂a_i/∂θ_i) (∂a_{i+1}/∂a_i) = (∂a_i/∂θ_i) D_i W_{i+1},   (3)

where D_i is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i + 1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in equation (2) can be fully decomposed into a recursive chain of matrix products:

∂L/∂θ_i = (∂a_i/∂θ_i) (∏_{j=i}^{n} D_j W_{j+1}) (∂L/∂a_{n+1}).   (4)

The norm of the Jacobian ∂a_{t+1}/∂a_t is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time t (layer i), as follows:

‖∂a_{t+1}/∂a_t‖ ≤ ‖D_t‖ ‖W_t‖ ≤ λ_{D_t} λ_{W_t} = η_t,   η_t ∈ ℝ,   (5)

where λ_{D_t} and λ_{W_t} are the largest singular values of the non-linearity's Jacobian D_t and the transition matrix W_t. In RNNs, W_t is shared across time and can be simply denoted as W.

Equation (5) shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have η_t < 1 at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive, thus tending to be norm-reducing, and is also data-dependent, whereas W can vary from being contractive to norm-preserving, to expansive and applies the same gain on the forward signal as on the back-propagated gradient signal.
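As a quick numerical illustration of Equations (4)-(5), and not part of the paper's code, the following sketch shows how a shared recurrent matrix with non-unitary gain compounds the gradient norm multiplicatively over time steps; all names here are ours.

import numpy as np

rng = np.random.default_rng(0)
T = 100
for scale in (0.95, 1.0, 1.05):            # contractive / orthogonal / expansive
    Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
    W = scale * Q                          # all singular values equal to `scale`
    g = rng.normal(size=64)
    for _ in range(T):                     # back-propagate through W repeatedly,
        g = W.T @ g                        # ignoring the (contractive) Jacobian D
    print(scale, np.linalg.norm(g))        # roughly 0.95^T, 1, 1.05^T times ||g||

With scale 0.95 the gradient norm collapses and with 1.05 it explodes after 100 steps, while the orthogonal case preserves the norm exactly, which is the regime the constraints below aim to control.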
Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm, which is given by

‖W‖_2 = max_x ( ‖Wx‖ / ‖x‖ ).   (6)

By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:

λ Σ_i ‖W_i^T W_i − I‖².   (7)

However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. We have

W = U S V^T.   (8)

Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value.

We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy U^T U = I and V^T V = I respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values.

During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows:

A = G M^T − M G^T,
M_new = (I + (η/2) A)^{-1} (I − (η/2) A) M,   (9)

where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform, and η is the learning rate.
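A minimal NumPy sketch of the geodesic update in Equation (9) follows. It assumes A = G M^T − M G^T, which is our reading of the garbled original; the function name and the quick orthogonality check are illustrative, not the authors' implementation.

import numpy as np

def cayley_step(M, G, lr):
    """One gradient step that keeps M (semi-)orthogonal.

    M: current orthogonal parameter matrix, G: dL/dM, lr: learning rate.
    """
    n = M.shape[0]
    A = G @ M.T - M @ G.T                      # skew-symmetric by construction
    I = np.eye(n)
    return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ M)

# Quick check that orthogonality is preserved:
rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.normal(size=(8, 8)))
M = cayley_step(M, rng.normal(size=(8, 8)), lr=0.1)
print(np.allclose(M.T @ M, np.eye(8)))         # True up to numerical error

Because the Cayley transform of a skew-symmetric matrix is orthogonal, the updated M stays on the Stiefel manifold without any explicit re-orthogonalization step.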
While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above.

If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation. Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization

s_i = 2m (σ(p_i) − 0.5) + 1,   s_i ∈ {diag(S)},   m ∈ [0, 1].   (10)

The singular values are thus restricted to the range [1 − m, 1 + m] and the underlying parameters p_i are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values: they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value's progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering (10), the gradient backpropagation of some loss L toward parameters p_i is found as

dL/dp_i = (ds_i/dp_i) (dL/ds_i) = 2m (dσ(p_i)/dp_i) (dL/ds_i).   (11)

From (11), it can be seen that the magnitude of the update step for p_i is scaled by the margin hyperparameter m. This means for example that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m.

This margin formulation both guarantees singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below.
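The following sketch shows the margin parameterization of Equation (10) in NumPy and verifies that the resulting singular values stay within [1 − m, 1 + m]; variable names are ours and this is an illustration, not the paper's code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_transition(U, V, p, margin):
    """Compose W = U diag(s) V^T with margin-bounded singular values."""
    s = 2.0 * margin * (sigmoid(p) - 0.5) + 1.0   # in [1 - margin, 1 + margin]
    return (U * s) @ V.T                          # same as U @ np.diag(s) @ V.T

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(6, 6)))
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))
W = build_transition(U, V, rng.normal(size=6), margin=0.1)
print(np.linalg.svd(W, compute_uv=False))          # all within [0.9, 1.1]

In a full model, U and V would be updated with the Cayley step sketched earlier while p is updated by ordinary stochastic gradient descent, with its learning rate renormalized by 2m as described above.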
"}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden to hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993).

The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol a_i ∈ {a_1, ..., a_p} out of p = 8 possible symbols. This sub-sequence is followed by T − 1 elements of the blank category a_0, which is terminated at step T by a delimiter symbol a_{p+1}, and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol.

The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range (0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T/2 − 1] and the second in the range [T/2, T − 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum.
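Under our reading of the copy-task layout just described (8 content symbols, one blank category, one delimiter), a data generator can be sketched as follows; the function and its defaults are illustrative assumptions, not the authors' generator.

import numpy as np

def copy_task_batch(batch, T, n_copy=10, n_symbols=8, rng=None):
    rng = rng or np.random.default_rng()
    blank, delim = n_symbols, n_symbols + 1        # category ids a_0 and a_{p+1}
    seq_len = T + 2 * n_copy
    x = np.full((batch, seq_len), blank, dtype=np.int64)
    y = np.full((batch, seq_len), blank, dtype=np.int64)
    payload = rng.integers(0, n_symbols, size=(batch, n_copy))
    x[:, :n_copy] = payload                        # sequence to remember
    x[:, n_copy + T - 1] = delim                   # signal to start copying
    y[:, -n_copy:] = payload                       # reproduce after the delay
    return x, y

x, y = copy_task_batch(batch=4, T=200)
print(x.shape, y.shape)                            # (4, 220) (4, 220)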
The sequential MNIST task is from Le et al. (2015): MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task uses a plain flattening of the image matrices; the harder variant of this task includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model.

The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. We run experiments on two subsets of the data: in the first, we use 23% of the data, with strings of up to 75 characters, and in the second we include over 99% of the dataset, picking strings with up to 300 characters.

In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden to hidden transition matrix.

In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50, and for generated data (the copy and adding tasks) we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments, although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden to hidden matrix factorization as in (8) using geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of 10 magnitude. The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy.

For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m. For the copy task, we used Elman networks without a transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix.

As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization but loosening orthogonality constraints can reduce the stability of signal propagation through the network.

For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden to hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs.

Figure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200, T=500, T=1000, T=10000, given different spectral margins. Convergence speed increases with margin size; however, large margin sizes are ineffective at longer sequence lengths (T=10000, right).

margin  initialization  accuracy
0       orthogonal      77.18
0.001   orthogonal      79.26
0.01    orthogonal      85.47
0.1     orthogonal      94.10
1       orthogonal      93.84
none    orthogonal      93.24
none    Glorot normal   66.71
none    identity        53.53
LSTM                    97.30

Table 1: Ordered sequential MNIST classification with different margin sizes and an LSTM."}, {"section_index": "4", "section_name": "3.1.2 PERFORMANCE ON REAL DATA", "section_text": "Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. 
We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity, and clipping gradients of 30 magnitude.

margin  initialization  accuracy
0       orthogonal      83.56
0.001   orthogonal      84.59
0.01    orthogonal      89.63
0.1     orthogonal      91.44
1       orthogonal      90.83
none    orthogonal      90.51
none    Glorot normal   79.33
none    identity        42.72
LSTM                    92.62

Table 2: Permuted sequential MNIST classification with different margin sizes and an LSTM.

margin  initialization  bpc   accuracy
0       orthogonal      2.16  55.31
0.01    orthogonal      2.16  55.33
0.1     orthogonal      2.12  55.37
1       orthogonal      2.06  57.07
100     orthogonal      2.04  57.51
none    orthogonal      2.06  57.38
none    Glorot normal   2.08  57.37
none    identity        2.25  53.83

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins.

margin  initialization  bpc   accuracy
0       orthogonal      2.20  54.88
0.01    orthogonal      2.20  54.83
0.1     orthogonal      2.24  54.10
1       orthogonal      2.36  51.12
100     orthogonal      2.36  51.20
none    orthogonal      2.34  51.30
none    Glorot normal   2.34  51.04
none    identity        2.68  45.35

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins.

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on both the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden to hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. 
On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability.

Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is useful for the MNIST tasks since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies."}, {"section_index": "5", "section_name": "3.1.3 SPECTRAL AND GRADIENT EVOLUTION", "section_text": "It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2, we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.

Figure 2: The norm of the gradient of the loss from the last time step with respect to the hidden units at a given time step for a length 220 RNN over 1000 update iterations for different margins. Iterations are along the abscissa and time steps are denoted along the ordinate. The first column margins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms are normalized across the time dimension.

Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. 
This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden to hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task, when modeling long sequences. 
Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well for the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values. All models are initialized with orthogonal hidden to hidden transition matrices except for the model on the bottom right where Glorot normal initialization is used.

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden to hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form λ ‖W^T W − I‖². This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10^-5. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10^-6) to keep U and V orthogonal and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a 10^-5 learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training.

Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints. A soft orthogonality constraint is applied to the transition matrix W for a regular RNN on T = 200 (Left) and the same is applied on a factorized RNN on T = 500 (Left center). Another constraint in the form of a mean one Gaussian prior on the singular values is applied to a factorized RNN on T = 200 (Right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (Right). Loosening orthogonality speeds convergence.
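For concreteness, here is a small sketch (our notation, not the authors' code) of the two soft constraints just discussed: the orthogonality penalty λ ‖W^T W − I‖² with its gradient, and the mean-one Gaussian prior on the singular values, whose strength plays the role of an inverse variance.

import numpy as np

def orthogonality_penalty(W, lam):
    n = W.shape[1]
    R = W.T @ W - np.eye(n)
    return lam * np.sum(R ** 2)

def orthogonality_penalty_grad(W, lam):
    # d/dW of lam * ||W^T W - I||_F^2 = 4 * lam * W (W^T W - I)
    n = W.shape[1]
    return 4.0 * lam * W @ (W.T @ W - np.eye(n))

def singular_value_prior(s, strength):
    # Negative log of a Gaussian prior with mean one on each singular value.
    return strength * np.sum((s - 1.0) ** 2)

Either term would simply be added to the task loss, with λ (or the prior strength) controlling how hard the optimizer is pulled back toward the orthogonal, norm-preserving regime.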
"}, {"section_index": "6", "section_name": "4 CONCLUSIONS", "section_text": "We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Artem Chernodub and Dimitri Nowicki. Norm-preserving orthogonal permutation linear unit activation functions (OPLU). arXiv preprint arXiv:1604.02313, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Hemant D Tagare. Notes on optimization on Stiefel manifolds. Technical report, Yale University, 2011.

Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. To appear in NIPS, 2016.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.

David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. 
Computational linguistics, 19(2):313-330, 1993.

Yasunori Nishimori. A note on Riemannian optimization methods on the Stiefel and the Grassmann manifolds. 2005.

Figure 5: Mean squared error (MSE) curves on the adding task for different spectral margins m (m = 0, 1, 10, 100, and no margin). For a trivial baseline solution of always outputting the same number, the expected baseline MSE is 0.167.

Figure 6: Loss curves for different factorized RNN parameterizations on the sequential MNIST task (left) and the permuted sequential MNIST task (right). The spectral margin is denoted by m; models with no margin have singular values that are directly optimized with no constraints; Glorot refers to a factorized RNN with no margin that is initialized with Glorot normal initialization."}, {"section_index": "9", "section_name": "5.2 COPY TASK NONLINEARITY", "section_text": "We found that nonlinearities such as a rectified linear unit (ReLU) (Nair & Hinton, 2010) or hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs for training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al. (2015), the non-linearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the non-linearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU) which introduces a trainable slope a for negative valued inputs x, producing f(x) = max(x, 0) + a·min(x, 0) (He et al., 2015). Setting the slope a to one would make the PReLU equivalent to an identity function. We experimented with clamping a to 0.5, 0.7 or 1 in a factorized RNN with a spectral margin of 0.3 and found that only the model with a = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope a, initialized to 0.7, and found that it converges to 0.96, further suggesting the optimal solution for the copy task is without a transition nonlinearity. Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task as it can lose information. Thus, we also tried a recent activation function that preserves information, called an orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover identical results on the copy task to those without a nonlinearity for different spectral margins.
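The following is a hedged sketch of the OPLU nonlinearity referenced above, under our reading of Chernodub & Nowicki (2016): units are taken in fixed pairs and each pair is replaced by (max, min), a permutation of its inputs, so the activation preserves the norm exactly. This is an illustration, not their code.

import numpy as np

def oplu(h):
    """h: (..., 2k) activations; pairs are (h[2i], h[2i+1])."""
    pairs = h.reshape(*h.shape[:-1], -1, 2)
    out = np.stack([pairs.max(axis=-1), pairs.min(axis=-1)], axis=-1)
    return out.reshape(h.shape)

x = np.random.default_rng(3).normal(size=8)
print(np.allclose(np.linalg.norm(x), np.linalg.norm(oplu(x))))  # True

Because the output is always a permutation of the input, the activation's Jacobian is itself a permutation matrix, so neither the forward signal nor the back-propagated gradient is attenuated.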
Although the method proposed in Section 2 relies on a matrix inversion, an operation with O(n^3) complexity for an n x n matrix, the running time of an RNN factorized in such a way actually remains reasonable. This running time is summarized in Table 5 and includes all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU and were run against the T=100 length copy task.

hidden units  SGD           geoSGD
128           21.9 ± 0.1    40.4 ± 0.2
500           46.7 ± 0.2    161.4 ± 0.2
1000          95.4 ± 0.3    711.2 ± 0.8

Table 5: Run time in seconds for 1000 iterations on a T=100 copy task of a regular RNN trained with stochastic gradient descent (SGD) compared against a factorized RNN trained with geodesic SGD on the bases (geoSGD) and regular SGD for other parameters."}]
B1KBHtcel
[{"section_index": "0", "section_name": "HERE\u2019S My POINT: ARGUMENTATION MINING WITH\nPOINTER NETWORKS", "section_text": "Peter Potash, Alexey Romanov & Anna Rumshisky\n{opotash, aromanov, arum}@cs.uml.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Computational approaches to argument mining/understanding have become very popular (Persins\n\n& Ng]||2016}/Cano-Basave & He||2016}{Wei et al.| 2016} Ghosh et al} 2016} [Palau & Moens||2009\nHabernal & Gurevych] 2016). One important avenue in this work is to understand the structure ir\nOne fundamental assumption when working with argumentative text is the pres:\nence of Arguments Components (ACs). The types of ACs are generally characterized as a claim o1\n\na premise 2013), with premises acting as support (or possibly attack) units for claims. Tc\nmodel more complex structures of arguments, some annotation schemes also include a major clain\n\nAC type (Stab & Gurevych| 2016} 20146).\nThere are two key assumptions our work makes going forward. First, we assume subtask | has\nbeen completed, i.e. ACs have already been identified. Second, we follow previous work that\n\nassumes a tree structure for the linking of ACs (Palau & Moens||2009 1987} |Peldszus &\n[2016) Specifically, a given AC can only have a single outgoing\n\nlink, but can have numerous incoming links. Furthermore, there is a \u2018head\u2019 component that has"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One of the major goals in automated argumentation mining 1s to uncover the argu-\nment structure present in argumentative text. In order to determine this structure,\none must understand how different individual components of the overall argument\nare linked. General consensus in this field dictates that the argument components\nform a hierarchy of persuasion, which manifests itself in a tree structure. This\nwork provides the first neural network-based approach to argumentation mining,\nfocusing on extracting links between argument components, with a secondary fo-\ncus on classifying types of argument components. In order to solve this problem,\nwe propose to use a modification of a Pointer Network architecture. A Pointer\nNetwork is appealing for this task for the following reasons: 1) It takes into ac-\ncount the sequential nature of argument components; 2) By construction, it en-\nforces certain properties of the tree structure present in argument relations; 3) The\nhidden representations can be applied to auxiliary tasks. In order to extend the\ncontribution of the original Pointer Network model, we construct a joint model\nthat simultaneously attempts to learn the type of argument component, as well as\ncontinuing to predict links between argument components. The proposed model\nachieves state-of-the-art results on two separate evaluation corpora. 
Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.

Generally, the task of processing argument structure encapsulates four distinct subtasks: 1) Given a sequence of tokens that represents an entire argumentative text, determine the token subsequences that constitute non-intersecting ACs; 2) Given an AC, determine the type of AC (claim, premise, etc.); 3) Given a set/list of ACs, determine which ACs have a link that determines the overall argument structure; 4) Given two linked ACs, determine whether the link is of a supporting or attacking relation. In this work, we focus on subtasks 2 and 3.

First, [cloning will be beneficial for many people who are in need of organ transplants]AC1. In addition, [it shortens the healing process]AC2. Usually, [it is very rare to find an appropriate organ donor]AC3 and [by using cloning in order to raise required organs the waiting time can be shortened tremendously]AC4.

Figure 1: An example of argument structure with four ACs. The left side shows raw text that has been annotated for the presence of ACs. Squiggly and straight underlining means an AC is a claim or premise, respectively. The ACs in the text have also been annotated for links to other ACs, which is shown in the right figure. ACs 3 and 4 are premises that link to another premise, AC2. Finally, AC2 links to a claim, AC1. AC1 therefore acts as the central argumentative component.

Figure 1 shows an example that we will use throughout the paper to concretely explain how our approach works. First, the left side of the figure presents the raw text of a paragraph in a persuasive essay (Stab & Gurevych, 2014), with the ACs contained in square brackets. Squiggly versus straight underlining differentiates between claims and premises, respectively. The ACs have been annotated as to how the ACs are linked, and the right side of the figure reflects this structure. The argument structure with four ACs forms a tree, where AC2 has two incoming links, and AC1 acts as the head, with no outgoing links. We also specify the type of AC, with the head AC marked as claim and the remaining ACs marked as premise. Lastly, we note that the order of argument components can be a strong indicator of how components should be related: linking to the first argument component can provide a competitive baseline heuristic (Peldszus & Stede, 2015; Stab & Gurevych, 2016).

Given the task at hand, we propose a modification of a Pointer Network (PN). A PN is a sequence-to-sequence model that outputs a distribution over the encoding indices at each decoding timestep. The PN is a promising model for link extraction in argumentative text because it inherently possesses three important characteristics: 1) it is able to model the sequential nature of ACs; 2) it constrains ACs to have a single outgoing link, thus partly enforcing the tree structure; 3) the hidden representations learned by the model can be used for jointly predicting multiple subtasks. We also note that since a PN is a type of sequence-to-sequence model (Sutskever et al., 2014), it allows the entire sequence to be seen before making predictions. This is important because if the problem were to be approached as standard sequence modeling (Robinson, 1994), making predictions at each forward timestep, it would only allow links to ACs that have already been seen. This is equivalent to only allowing backward links. We note that we do test a simplified model that only uses hidden states from an encoding network to make predictions, as opposed to the sequence-to-sequence architecture present in the PN (see Section 5).
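As an illustration (ours, not the paper's), the Figure 1 structure can be written as a parent-pointer array: entry i holds the index that AC i links to, with a head marked by pointing at itself, which is also the convention the decoder uses later (Section 3.2).

links = [1, 1, 2, 2]        # 1-indexed: AC1->AC1 (head), AC2->AC1, AC3->AC2, AC4->AC2

def has_single_head(links):
    """Check the single-outgoing-link structure has exactly one self-pointing head."""
    heads = [i for i, j in enumerate(links, start=1) if i == j]
    return len(heads) == 1

print(has_single_head(links))   # True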
PNs were originally proposed to allow a variable length decoding sequence (Vinyals et al., 2015b). Alternatively, the PN we implement differs from the original model in that we decode for the same number of timesteps as there are input components. We also propose a joint PN for both extracting links between ACs and predicting the type of AC. The model uses the hidden representation of ACs produced during the encoding step (see Section 4). Aside from the partial assumption of tree structure in the argumentative text, our models do not make any additional assumptions about the AC types or connectivity, unlike previous approaches. We evaluate our models on the corpora of Stab & Gurevych (2016) and Peldszus (2014), and compare our results with the results of the aforementioned authors.

Recent work in argumentation mining offers data-driven approaches for the task of predicting links between ACs. Stab & Gurevych (2014b) approach the task as a binary classification problem. The authors train an SVM with various semantic and structural features. Peldszus & Stede (2015) have also used classification models for predicting the presence of links. Various authors have also proposed to jointly model link extraction with other subtasks from the argumentation mining pipeline, using either an Integer Linear Programming (ILP) framework (Persing & Ng, 2016; Stab & Gurevych, 2016) or directly feeding previous subtask predictions into another model. The former joint approaches are evaluated on annotated corpora of persuasive essays (Stab & Gurevych, 2014a; 2016), and the latter on a corpus of microtexts (Peldszus, 2014). The ILP framework is effective in enforcing a tree structure between ACs when predictions are made from otherwise naive base classifiers.

Unrelated to argumentation mining specifically, recurrent neural networks have previously been proposed to model tree/graph structures in a linear manner. Vinyals et al. (2015c) use a sequence-to-sequence model for the task of syntactic parsing. The authors linearize input parse graphs using a depth-first search, allowing them to be consumed as a sequence, achieving state-of-the-art results on several syntactic parsing datasets. Bowman et al. (2015) experiment on an artificial entailment dataset that is specifically engineered to capture recursive logic (Bowman et al., 2014). The text is annotated with brackets, in an original attempt to provide easy input into a recursive neural network. However, standard recurrent neural networks can take in complete sentence sequences, brackets included, and perform competitively with a recursive neural network.

In this section we will describe how we use a PN for the problem of extracting links between ACs. We begin by giving a general description of the PN model."}, {"section_index": "3", "section_name": "3.1 POINTER NETWORK", "section_text": "A PN is a sequence-to-sequence model (Sutskever et al., 2014) with attention (Bahdanau et al., 2014) that was proposed to handle decoding sequences over the encoding inputs, and can be extended to arbitrary sets. 
The original motivation for a pointer network was to allow networks to learn solutions to algorithmic problems, such as the traveling salesperson and convex hull, where the solution is a sequence over candidate points. The PN model is trained on input/output sequence pairs (E, D), where E is the source and D is the target (our choice of E, D is meant to represent the encoding, decoding steps of the sequence-to-sequence model). Given model parameters Θ, we apply the chain rule to determine the probability of a single training example:

p(D|E; Θ) = ∏_{i=1}^{m(E)} p(D_i | D_1, ..., D_{i−1}, E; Θ),   (1)

where the function m signifies that the number of decoding timesteps is a function of each individual training example. We will discuss shortly why we need to modify the original definition of m for our application. By taking the log-likelihood of Equation (1), we arrive at the optimization objective:

Θ* = argmax_Θ Σ_{E,D} log p(D|E; Θ).   (2)

The PN uses Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) for sequential modeling, which produces a hidden layer h at each encoding/decoding timestep. In practice, the PN has two separate LSTMs, one for encoding and one for decoding. Thus, we refer to encoding hidden layers as e, and decoding hidden layers as d.

The PN uses a form of content-based attention (Bahdanau et al., 2014) to allow the model to produce a distribution over input elements. This can also be thought of as a distribution over input indices, wherein a decoding step 'points' to the input. Formally, given encoding hidden states (e_1, ..., e_n), the model calculates p(D_i | D_1, ..., D_{i−1}, E) as follows:

u_j^i = v^T tanh(W_1 e_j + W_2 d_i),   (3)

p(D_i | D_1, ..., D_{i−1}, E) = softmax(u^i),   (4)

where matrices W_1, W_2 and vector v are parameters of the model (along with the LSTM parameters used for encoding and decoding). In Equation (3), prior to taking the dot product with v, the resulting transformation can be thought of as creating a joint, hidden representation of inputs i and j. Vector u^i in Equation (3) is of length n, and index j corresponds to input element j. Therefore, by taking the softmax of u^i, we are able to create a distribution over the input.

Figure 2: Applying a Pointer Network to the example paragraph in Figure 1, with LSTMs unrolled over time."}, {"section_index": "4", "section_name": "3.2 LINK EXTRACTION AS SEQUENCE MODELING", "section_text": "A given piece of text has a set of ACs, which occur in a specific order in the text, (C_1, ..., C_n). Therefore, at encoding timestep i, the model is fed a representation of C_i. Since the representation is large and sparse (see Section 3.3 for details on how we represent ACs), we add a fully-connected layer before the LSTM input. Given a representation R_i for AC C_i, the LSTM input A_i becomes:

A_i = σ(W_rep R_i + b_rep),   (5)

where W_rep, b_rep in turn become model parameters, and σ is the sigmoid function (similarly, the decoding network applies a fully-connected layer with sigmoid activation to its inputs, see Figure 3). We also experimented with relu and elu activations, but found sigmoid to yield the best performance. At encoding step i, the encoding LSTM produces hidden layer e_i, which can be thought of as a hidden representation of AC C_i.

In order to make the PN applicable to the problem of link extraction, we explicitly set the number of decoding timesteps to be equal to the number of input components. Using notation from Equation (1), the decoding sequence length for an encoding sequence E is simply m(E) = |{C_1, ..., C_n}|, which is trivially equal to n. By constructing the decoding sequence in this manner, we can associate decoding timestep i with AC C_i.

From Equation (4), decoding timestep D_i will output a distribution over input indices. The result of this distribution will indicate to which AC component C_i links. Recall there is a possibility that an AC has no outgoing link, such as if it's the root of the tree. In this case, we state that if AC C_i does not have an outgoing link, decoding step D_i will output index i. Conversely, if D_i outputs index j, such that j is not equal to i, this implies that C_i has an outgoing link to C_j. For the argument structure in Figure 1, the corresponding decoding sequence is (1, 1, 2, 2). The topology of this decoding sequence is illustrated in Figure 2. Note how C_1 points to itself since it has no outgoing link.

Finally, we note that we modify the PN structure to have a Bidirectional LSTM as the encoder. Thus, e_i is the concatenation of forward and backward hidden states, produced by two separate LSTMs. The decoder remains a standard forward LSTM.
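To make Equations (3)-(4) concrete, here is a minimal NumPy sketch of the pointing distribution; the function name and parameter shapes are our assumptions, not the paper's implementation.

import numpy as np

def pointer_distribution(E, d_i, W1, W2, v):
    """E: (n, h) encoder states; d_i: (h,) decoder state at step i."""
    u = np.tanh(E @ W1.T + d_i @ W2.T) @ v      # (n,) one score per input AC
    u = u - u.max()                             # numerically stable softmax
    p = np.exp(u)
    return p / p.sum()

rng = np.random.default_rng(4)
n, h = 4, 16
p = pointer_distribution(rng.normal(size=(n, h)), rng.normal(size=h),
                         rng.normal(size=(h, h)), rng.normal(size=(h, h)),
                         rng.normal(size=h))
print(p.argmax())   # predicted link target for step i (self-index = no outgoing link)

Running the decoder for n steps and taking the argmax at each step yields exactly a parent-pointer sequence such as (1, 1, 2, 2) for the Figure 1 example.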
At each timestep of the decoder, the network takes in the representation of an AC. Each AC is itself a sequence of tokens, similar to the recently proposed Question-Answering dataset (2015). We follow the work of Stab & Gurevych (2016) and focus on three different types of features to represent our ACs: 1) Bag-of-Words of the AC; 2) Embedding representation based on GloVe embeddings (Pennington et al., 2014); 3) Structural features: whether or not the AC is the first AC in a paragraph, and whether the AC is in an opening, body, or closing paragraph. See Section 6 for an ablation study of the proposed features.

Figure 3: Architecture of the joint model applied to the example in Figure 1.

Up to this point, we focused on the task of extracting links between ACs. However, recent work has shown that joint models that simultaneously try to complete multiple aspects of the subtask pipeline outperform models that focus on a single subtask (Persing & Ng, 2016; Stab & Gurevych, 2014b; Peldszus & Stede, 2015). Therefore, we will modify the architecture we proposed in Section 3 so that it allows us to perform AC classification (Kwon et al., 2007; Rooney et al., 2012) together with link prediction. Knowledge of an individual subtask's predictions can aid in other subtasks. For example, claims do not have an outgoing link, so knowing the type of AC can aid in the link prediction task. This can be seen as a way of regularizing the hidden representations from the encoding component (Che et al.).

Predicting AC type is a straightforward classification task: given AC C_i, we need to predict whether it is a claim or premise. Some annotation schemes also include the class major claim (Stab & Gurevych, 2014), which means this can be a multi-class classification task. For encoding timestep i, the model creates hidden representation e_i. 
This can be thought of as a representation of AC C_i. Therefore, our joint model simply passes this representation through a fully connected layer, producing z_i. Consequently, the probability of predicting component type at timestep i is defined as:

p(C_i) = p(E_i | E_1, ..., E_i; Θ) = softmax(z_i).   (6)

Finally, combining this new prediction task with Equation (2), we arrive at the new training objective:

Θ* = argmax_Θ  α Σ_{E,D} log p(D|E; Θ) + (1 − α) Σ_E log p(E|Θ),   (7)

which simply sums the costs of the individual prediction tasks; the second summation is the cost for the new task of predicting argument component type. α ∈ [0, 1] is a hyperparameter that specifies how we weight the two prediction tasks in our cost function. The architecture of the joint model, applied to our ongoing example, is illustrated in Figure 3.
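A small sketch (our names, not the paper's code) of the α-weighted joint objective above, written as a combined negative log-likelihood for a single example:

import numpy as np

def joint_loss(link_probs, link_targets, type_probs, type_targets, alpha=0.5):
    """Each *_probs is (n_components, n_classes); targets are index arrays."""
    n = len(link_targets)
    link_nll = -np.log(link_probs[np.arange(n), link_targets]).sum()
    type_nll = -np.log(type_probs[np.arange(n), type_targets]).sum()
    return alpha * link_nll + (1.0 - alpha) * type_nll

Setting α = 0.5, as in the experiments below, weights the link and type losses equally.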
Once training is completed, we select the model with the highest validation accuracy (on the link prediction task) and evaluate it on the held-out test set. At test time, we take a greedy approach and select the index of the probability distribution (whether link or type prediction) with the highest value."}, {"section_index": "6", "section_name": "5 RESULTS", "section_text": "The results of our experiments are presented in Tables 1 and 2. For each corpus, we present f1 scores for the AC type classification experiment, with a macro-averaged score of the individual class f1 scores. We also present the f1 scores for predicting the presence/absence of links between ACs, as well as the associated macro-average between these two values.
We implement and compare four types of neural models: 1) The previously described PN-based model depicted in Figure 3 (called PN in the tables); 2) The same as 1), but without the fully-connected input layers; 3) The same as 1), but the model only predicts the link task, and is therefore not optimized for type prediction; 4) A non-sequence-to-sequence model that uses the hidden layers produced by the BLSTM encoder with the same type of attention as the PN (called BLSTM in the table). That is, d_i in Equation 3 is replaced by e_i.
In both corpora we compare against the following previously proposed models: the Base Classifier (Stab & Gurevych, 2016) is a feature-rich, task-specific (AC type or link extraction) SVM classifier. Neither of these classifiers enforces structural or global constraints. Conversely, the ILP Joint Model (Stab & Gurevych, 2016) provides constraints by sharing prediction information between the base classifiers. For example, the model attempts to enforce a tree structure among ACs within a given paragraph, as well as using incoming link predictions to better predict the type class claim. For the microtext corpus only, we have the following comparative models: Simple (Peldszus & Stede, 2015) is a feature-rich logistic regression classifier. Best EG (Peldszus & Stede, 2015) creates an Evidence Graph (EG) from the predictions of a set of base classifiers. The EG models the potential argument structure, and offers a global optimization objective that the base classifiers attempt to optimize by adjusting their individual weights. Lastly, MP+p (Peldszus & Stede, 2015) combines predictions from base classifiers with an MSTParser, which applies 1-best MIRA structured learning.

Table 2: Results on microtext corpus.

                  Type prediction              Link prediction
Model           | Macro f1 | Cl f1 | Pr f1  || Macro f1 | Link f1 | No Link f1
Simple          | .817     | -     | -      || .663     | .478    | .848
Best EG         | .869     | -     | -      || .693     | .502    | .884
MP+p            | .831     | -     | -      || .720     | .546    | .894
Base Classifier | .830     | .712  | .937   || .650     | .446    | .841
ILP Joint Model | .857     | .710  | .943   || .683     | .486    | .881
PN              | .813     | .692  | .934   || .740     | .577    | .903
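For reference, the macro-averaged f1 reported throughout these tables is the unweighted mean of the per-class f1 scores; a minimal sketch (not the authors' evaluation code) is:

```python
import numpy as np

def f1(y_true, y_pred, cls):
    """Per-class F1: harmonic mean of precision and recall for label `cls`."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of the per-class F1 scores."""
    return float(np.mean([f1(y_true, y_pred, c) for c in classes]))

# Toy usage for the link task: classes are "no link" (0) and "link" (1).
y_true = np.array([1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0])
print(macro_f1(y_true, y_pred, classes=[0, 1]))
```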
"}, {"section_index": "7", "section_name": "6 DISCUSSION", "section_text": "First, we point out that the PN model achieves state-of-the-art on 10 of the 13 metrics in Tables 1 and 2, including the highest results in all metrics on the Persuasive Essay corpus, as well as link prediction on the Microtext corpus. The performance on the Microtext corpus is very encouraging for several reasons. First, the fact that the model can perform so well with only a hundred training examples is rather remarkable. Second, although we motivate the use of a PN due to the fact that it partially enforces the tree structure in argumentation, other models explicitly contain further constraints. For example, only premises can have outgoing links, and there can be only one claim in a text. As for the other neural models, the BLSTM model performs competitively with the ILP Joint Model on the persuasive essay corpus, but trails the performance of the PN model. We believe this is because the PN model is able to create two different representations for each AC, one each in the encoding/decoding state, which benefits performance in the dual tasks, whereas the BLSTM model must encode information relating to type as well as link prediction in a single hidden representation. On one hand, the BLSTM model outperforms the ILP model on link prediction, yet it is not able to match the ILP Joint Model's performance on type prediction, primarily due to the BLSTM's poor performance on predicting the major claim class. Another interesting outcome is the importance of the fully-connected layer before the LSTM input. The results show that this extra layer of depth is crucial for good performance on this task. Without it, the PN model is only able to perform competitively with the Base Classifier. The results dictate that even a simple fully-connected layer with sigmoid activation can provide a useful dimensionality reduction for feature representation. Finally, the PN model that only extracts links suffers a large drop in performance, conveying that the joint aspect of the PN model is crucial for high performance in the link prediction task.

Table 1: Results on persuasive essay corpus.

                  Type prediction                      Link prediction
Model           | Macro f1 | MC f1 | Cl f1 | Pr f1  || Macro f1 | Link f1 | No Link f1
Base Classifier | .794     | .891  | .611  | .879   || .717     | .508    | .917
ILP Joint Model | .826     | .891  | .682  | .903   || .751     | .585    | .918
BLSTM           | .810     | .830  | .688  | .912   || .754     | .589    | .919
PN No FC Input  | .791     | .826  | .642  | .906   || .708     | .514    | .901
PN No Type      | -        | -     | -     | -      || .709     | .511    | .906
PN              | .849     | .894  | .732  | .921   || .767     | .608    | .925

Table 3 shows the results of an ablation study for AC feature representation. Regarding link prediction, BOW features are clearly the most important, as their absence results in the highest drop in performance. Conversely, the presence of structural features provides the smallest boost in performance, as the model is still able to record state-of-the-art results compared to the ILP Joint Model. This shows that, on one hand, the PN model is able to capture structural cues through sequence modeling and semantics (the ILP Joint Model directly integrates these structural features); however, the PN model still does benefit from their explicit presence in the feature representation. When considering type prediction, both BOW and structural features are important, and it is the embedding features that provide the least benefit. The ablation results also provide an interesting insight into the effectiveness of different 'pooling' strategies for using individual token embeddings to create a multi-word embedding. The popular method of averaging embeddings (which is used by Stab & Gurevych (2016) in their system) is in fact the worst method, although its performance is still competitive with the previous state-of-the-art. Conversely, max pooling produces results that are on par with the PN results from Table 1.

Table 3: Feature ablation study. * indicates that both BOW and Structural are present, as well as the stated embedding type.

                  Type prediction                      Link prediction
Model           | Macro f1 | MC f1 | Cl f1 | Pr f1  || Macro f1 | Link f1 | No Link f1
No structural   | .808     | .824  | .694  | .907   || .760     | .598    | .922
No BOW          | .796     | .833  | .652  | .902   || .728     | .543    | .912
No Embeddings   | .827     | .874  | .695  | .911   || .750     | .581    | .918
Only Avg Emb*   | .832     | .873  | .717  | .917   || .751     | .583    | .918
Only Max Emb*   | .843     | .874  | .732  | .923   || .766     | .608    | .924
Only Min Emb*   | .838     | .878  | .719  | .918   || .763     | .602    | .924
All features    | .849     | .894  | .732  | .921   || .767     | .608    | .925

Table 4: Results of binning test data by length of AC sequence. * indicates that this bin does not contain any major claim labels, and this average only applies to claim and premise classes. However, we do not disable the model from predicting this class: the model was able to avoid predicting this class on its own.

                  Type prediction                      Link prediction
Bin             | Macro f1 | MC f1 | Cl f1 | Pr f1  || Macro f1 | Link f1 | No Link f1
1 ≤ len ≤ 4     | .863     | .902  | .798  | .889   || .918     | .866    | .969
4 < len ≤ 8     | .680     | .444  | .675  | .920   || .749     | .586    | .912
8 < len ≤ 12    | .862*    | .000* | .762  | .961   || .742     | .542    | .941

Table 4 shows the results on the Persuasive Essay test set with the examples binned by sequence length. First, it is not a surprise to see that the model performs best when the sequences are the shortest. As the sequence length increases, the accuracy on link prediction drops. This is possibly due to the fact that as the length increases, a given AC has more possibilities as to which other ACs it can link to, making the task more difficult. Conversely, there is actually a rise in no link prediction accuracy from the second to third row. This is likely due to the fact that since the model predicts at most one outgoing link, it indirectly predicts no link for the remaining ACs in the sequence.
Since the chance probability is low for having a link between a given AC in a long sequence, the no link performance is actually better in longer sequences."}, {"section_index": "8", "section_name": "7 CONCLUSION", "section_text": "In this paper we have proposed how to use a modified PN (Vinyals et al., 2015b) to extract links between ACs in argumentative text. We evaluate our models on two corpora: a corpus of persuasive essays (Stab & Gurevych, 2016), and a corpus of microtexts (Peldszus, 2014). The PN model records state-of-the-art results on the persuasive essay corpus, as well as achieving state-of-the-art results for link prediction on the microtext corpus, despite only having 90 training examples. The results show that jointly modeling the two prediction tasks is crucial for high performance, as well as the presence of a fully-connected layer prior to the LSTM input. Future work can attempt to learn the AC representations themselves, such as in Kumar et al. (2015). Lastly, future work can integrate subtasks 1 and 4 into the model. The representations produced by Equation 3 could potentially be used to predict the type of link connecting ACs, i.e. supporting or attacking; this is the fourth subtask in the pipeline. In addition, a segmenting technique, such as the one proposed by Weston et al. (2014), can accomplish subtask 1."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Samuel R Bowman, Christopher Potts, and Christopher D Manning. Recursive neural networks can learn logical semantics. arXiv preprint arXiv:1406.1827, 2014.
Samuel R Bowman, Christopher D Manning, and Christopher Potts. Tree-structured composition in neural networks without tree-structured architectures. arXiv preprint arXiv:1506.04834, 2015.
Amparo Elizabeth Cano-Basave and Yulan He.
A study of the impact of persuasive argumentation in political debates. In Proceedings of NAACL-HLT, pp. 1405-1413, 2016.
Zhengping Che, David Kale, Wenzhe Li, Mohammad Taha Bahadori, and Yan Liu. Deep computational phenotyping. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 507-516. ACM, 2015.
Trudy Govier. A practical study of argument. Cengage Learning, 2013.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Huy V Nguyen and Diane J Litman. Context-aware argumentative relation mining. 2016.
Raquel Mochales Palau and Marie-Francine Moens. Argumentation mining: the detection, classification and structure of arguments in text. In Proceedings of the 12th international conference on artificial intelligence and law, pp. 98-107. ACM, 2009.
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Alex Graves and Jürgen Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in neural information processing systems, pp. 545-552, 2009.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-43, 2014.
Anthony J Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298-305, 1994.
Niall Rooney, Hui Wang, and Fiona Browne. Applying kernel methods to argumentation mining. In FLAIRS Conference, 2012.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
Isaac Persing and Vincent Ng. End-to-end argumentation mining in student essays. In Proceedings of NAACL-HLT, pp. 1384-1394, 2016."}]
ryTYxh5ll
[{"section_index": "0", "section_name": "CONTENT2VEC: SPECIALIZING JOINT\nREPRESENTATIONS OF PRODUCT IMAGES AND TEXT\nFOR THE TASK OF PRODUCT RECOMMENDATION", "section_text": "Thomas Nedelec, Elena Smirnova & Flavian Vasile\nft .nedelec,e.smirnova, f.vasile}@criteo.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Online product recommendation is now a key driver of demand, not only in E-commerce businesses\nthat recommend physical products, such as Amazon (Marshall] 2006), TaoBao 2013) and\nEbay (Academy}|2013), but also in online websites that recommend digital content such as news\n\n, Google - (2010)), movies (Netflix -[Bell & Koren|(2007)),\n\n), videos (YouTube - games (Xbox -\nTwo of the most challenging aspects of recommendation in general and of product recommendatio\nin particular, are scalability and freshness. The first one addresses the problem of making fast rec\nommendations in parallel, the second addresses the problem of updating recommendations based o\nreal-time user interaction. One of the most encountered architecture solutions for recommendatio\nat scale divides the recommendation process in two stages: a candidate generation stage that prune\nthe number of recommendable items from billions to a couple of hundreds, followed by a secon\n\nitem selection stage that decides the final set of items to be displayed to the user, as shown in Figur\nAicae!Mazarel f9016h Chane et al 9014) IPauinotan etal Wan14h)\n[he first stage generally implies the pre-generation of an inverted index over the set of recommend\nible products, paired with a real-time retrieval module, similarly to a search engine architecture\nn our current paper we focus on the cases where the system supports vectorial product querie:\n[he sources of the vectorial representations range from the set of co-occurring products, like in th\ncase of neighborhood-based collaborative filtering, to a low-dimensional representation produce\n7ia matrix factorization or to an embedded representation produced via a deep neural network.\nThe second stage takes the candidate set and decides the final list of recommendations, usually by\noptimizing a ranking metric. This stage has in general a lot more constraints in terms of latency, due\nto its use of real-time signal that makes its predictions not cacheable. Therefore, in terms of model\nchoice, the first stage can be a lot more complex than the second. In terms of impact, the quality of\nthe candidate set coming from the first stage is crucial, since this constitutes a hard threshold on the\nperformance of the second stage and of the overall system.\nBecause of the feasibility of using a more complex model and the potential impact on the final\nrecommendation performance, we choose to concentrate our efforts on the task of optimal candi-"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a unified product embedded representation that is optimized for the\ntask of retrieval-based product recommendation. We generate this representation\nusing Content2Vec, a new deep architecture that merges product content infor-\nmation such as text and image, and we analyze its performance on hard recom-\nmendation setups such as cold-start and cross-category recommendations. 
In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture, Content2Vec+, and show its lift in performance versus non-hybrid approaches in both cold start and normal recommendation regimes.

[Figure 1: 2-Stage Recommender System Architecture. Stage 1 (candidate item set generation): item representations feed a build/update index job that maintains an item inverted index, which a retrieval service queries to produce candidate items. Stage 2 (final recommendation item set generation): the candidates are ranked to produce the recommended items.]

We formalize the problem as a link prediction task, where given a set of past co-purchased products we try to predict unseen pairs of products. Related work in representation learning for recommendation investigated the use of collaborative filtering (CF), text and product images, but to our knowledge, there has been no attempt to unify all of these signals in a single representation. We see this as an opportunity to investigate the leveraging effect of generating a Unified Product Representation via a deep-learning approach. In the following, we formally define the set of associated requirements we would like to satisfy:
- Relevance: the representation should be optimized for product recommendation relevance as measured by the associated target metrics (in this case, modeling it as a link prediction task and optimizing for the AUC of product pair prediction).
- Coverage: the representation should leverage all available product information (in our case, all product information available in the product catalog together with observed product co-occurrences).
- Cross-modality expressiveness: the representation should be able to account for interactions between various information sources such as text and image (it can take into account the fact that the word "red" and the "red" color detector are correlated).
- Pair-wise expressiveness: the representation should be able to account for interactions between the two products.
- Robustness: the representation should operate well (recommendation performance will not degrade dramatically) in hard recommendation situations such as product cold-start (new products, new product pairs) and cross-category recommendation. These are important use-cases in product recommendation, when the product catalog has high churn (as in the case of flash sales websites or classifieds) or the recommendation needs to leverage cross-advertiser signal (as in the case of new users and user acquisition advertising campaigns). This is a different goal from simply trying to optimize for relevance metrics, due to the inherent limitations of offline metrics in predicting future online performance.
- Retrieval-optimized: the representation should be adapted to a content-retrieval setup, both on the query and on the indexing side, meaning that the vectors should be either small, sparse, or both (see the sketch after this list).
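To illustrate the retrieval-optimized requirement, here is a minimal sketch of stage-1 candidate generation with vectorial product queries; brute-force inner-product scoring stands in for the inverted index and retrieval service, and all names and sizes are hypothetical.

```python
import numpy as np

def top_k_candidates(query_vec, index_vecs, k=5):
    """Score every indexed product by inner product with the query vector
    and return the indices of the k best-scoring products."""
    scores = index_vecs @ query_vec          # (num_products,)
    return np.argsort(-scores)[:k]

# Toy usage: 1,000 indexed products with 64-dimensional embeddings.
rng = np.random.default_rng(1)
index_vecs = rng.standard_normal((1000, 64))
query_vec = rng.standard_normal(64)
print(top_k_candidates(query_vec, index_vecs, k=5))
```

In a production system the exhaustive scan above would be replaced by an approximate nearest-neighbor index, which is precisely why small or sparse vectors matter.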
We propose a modular deep architecture that leverages state-of-the-art architectures for generating embedded representations for image, text and CF input, re-specializes the resulting product embeddings and combines them into a single product vector. This is a very general architecture that can plug in any networks in the image and text domain and re-use them for the problem of product recommendation, along with their gains in representation learning for the two domains. We investigate multiple ways of merging the modality-specific product information and propose a new type of residual-inspired unit, which we name Pairwise Residual Unit, that can model the joint aspects of the different product embeddings, and show that it leads to good improvements.
We analyze our proposed architecture on an Amazon dataset (McAuley et al., 2015) containing information on co-purchased products. We report our improvements versus a text and an image-based baseline that was introduced in previous work by McAuley et al. (2015), and show improvements both on normal and hard recommendation regimes such as cold-start and cross-category setups.
Our approach is similar to the recent work by Covington et al. (2016), who propose a solution for video recommendation at YouTube. Unlike their proposed solution, where, in order to support user vector queries, the candidate generation step co-embeds users and items, we are interested in co-embedding just the product pairs, which generally have a much smaller dimension. In our approach, the personalization step can happen after the per-item candidates are retrieved.
Our main contributions are the following:
- We propose a novel way of integrating deep-learning item representation in the context of a large scale recommender system with a 2-stage serving architecture, and introduce the new task of Unified Product Representation for optimal candidate selection in both cold start and normal recommendation setups.
- We introduce a new deep architecture that merges content and CF signal for the task of product recommendation and propose the Pairwise Residual Unit, a new learning component that models the joint product representations.
- We introduce two novel experimental setups (hard cold start, cross-category) and test that the proposed Content2Vec architecture satisfies the requirements we defined.
Though the focus of our work is on improving product recommendation through representation learning, we believe that simple extensions of our approach can be applied to many other recommendation scenarios.
The rest of the paper goes as follows: In Section 2 we cover previous related work and the relationship with our method. In Section 3 we present the Content2Vec model, followed by a detailed description of the resulting architecture in Section 4. In Section 5 we present the experimental setup and go over the results in Section 5.2. In Section 6 we summarize our findings and conclude with future directions of research."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Our work fits in the new wave of deep learning based recommendation solutions, that similarly to classical approaches can fall into 3 categories, namely collaborative filtering based, content based or hybrid approaches.
Several approaches use neural networks to build better item representations based on the co-occurrence matrix. The Prod2Vec algorithm (see Grbovic et al. (2015)) implements Word2Vec (Mikolov et al., 2013a;b), an algorithm that is at origin a shallow neural language model, on sequences of product ids, to reach a low-dimensional representation of each product. Among other embedding solutions that use the item relationship graph are the more recent extensions to the Word2Vec algorithm such as GloVe (Pennington et al., 2014) and SWIVEL (Shazeer et al., 2016), and the graph embedding solutions proposed in Node2Vec (Grover & Leskovec, 2016) and SDNE (Wang et al., 2016).
Content-based methods recommend an item to a user based upon an item description and a user profile (Pazzani & Billsus, 2007). This idea was deeply investigated in the information retrieval literature: in the context of web search, DSSM (Huang et al.,
2013) and its extensions (Shen et al., 2014) (C-DSSM) and (Shan et al., 2016) are some of the most successful methods that specialize query and document text embedding in order to predict implicit feedback signal such as document click-through rate. In the context of product recommendation, in (McAuley et al., 2015) the authors feed a pre-trained CNN (a CNN trained on the ImageNet dataset, which is an image classification task that is very different from the task of image-based product recommendation) with product images and use the last layer of the network as the product embedding. This representation is subsequently used to compute similarities between products. Similarly, the authors in (Van den Oord et al., 2013) use CNNs to compute similarities between songs. Yosinski et al. (2014) show that the low layers of DNNs trained on different tasks are often similar and that good performance can be reached by fine-tuning a network previously trained on another task. In the case of recommendation systems, this fine tuning was implemented in Veit et al. (2015), where the authors specialize a GoogLeNet architecture to the task of predicting co-view events based on product pictures.
The performance of Collaborative Filtering (CF) models is often higher than that of content-based ones, but it suffers from the cold-start problem. To take advantage of the best of both worlds, hybrid models use both sources of information in order to make recommendations. One possible way to incorporate product information is using it as side information in the product sequence model, as proposed in Meta-Prod2Vec (Vasile et al., 2016), leading to better product embeddings for products with low signal (low number of co-occurrences). In this work we continue the investigation of using both types of signal, this time both at training and product recommendation time.
Our proposed approach takes the idea of specializing the input representations to the recommendation task and generalizes it for multi-modality inputs, in order to leverage all product information and, in particular, product images and product title and description text.
The main criteria for the Content2Vec architecture are to allow us to easily plug in new sources of signal and to replace existing embedding solutions with new versions. We are also interested in separating product-level embeddings from pair-level embeddings, such that the network can generate product vectors that are readily indexable.
As a result, the Content2Vec architecture has three types of modules, as shown in Figure 2:
- Content-specific embedding modules that take raw product information and generate the associated vectors. In this paper we cover embedding modules for text, image, categorical attributes and product co-occurrences (for an example, see Figure 3).
- Overall product embedding modules that merge all the product information into a unified product representation.
- Pair embedding module that merges the product-to-product interactions and computes the final similarity score. In the case of retrieval-optimized product embeddings, this module becomes the inner-product between the two items, and all interactions between them are to be approximated within the product-level embedding modules.
Content2Vec training follows the architecture, learning module-by-module. In the first stage, we initialize the content-specific modules with embeddings from proxy tasks (classification for image, language modeling for text) and re-specialize them to the task of product recommendation. For the specialization task, as mentioned in Section 1, we frame the objective as a link prediction task where we try to predict the pairs of products purchased together. We describe the loss function in Section 3.1.
In the second stage, we stack the modality-specific embeddings generated in the first stage into a general product vector and learn an additional residual vector using the same learning objective as in the specialization step. This will be described in depth in Section 4.2.
Finally, in the third stage, given the updated product vectors from stage two, we learn the linear combination between the similarities of the product vectors and make the final prediction.

[Figure 2: Content2Vec architecture combines content-specific modules (e.g., image and text embedding modules) with a residual vector to produce an embedding vector for each product, then uses these vectors to compute similarities between products via a pair embedding.]

"}, {"section_index": "4", "section_name": "3.1 LOSS FUNCTION", "section_text": "The previous work on learning pair-wise item distances concentrated on using ranking (McFee & Lanckriet, 2010), siamese (Hadsell et al., 2006) or logistic loss (Zheng et al., 2015). For optimizing the link prediction objective we choose the logistic similarity loss (eq. 1), which has the advantage of having a fast approximation via the Negative Sampling loss (Mikolov et al., 2013b) shown in eq. 2. By using Negative Sampling, the prediction step can scale up to a large number of items, by using all positive pairs and sampling the negatives on the fly:

L(θ) = Σ_ij X_ij^POS log σ(sim(a_i, b_j)) + X_ij^NEG log σ(−sim(a_i, b_j))    (1)

L_NS(θ) = Σ_ij X_ij^POS ( log σ(sim(a_i, b_j)) + Σ_{l=1}^{k} E_{m∼P_neg} log σ(−sim(a_i, m)) )    (2)

Content-specific modules can have various architectures and are meant to be used separately in order to increase modularity. Their role is to map all types of item signal into embedded representations.
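As an illustration of the Negative Sampling objective in eq. 2, here is a minimal per-positive-pair sketch; it assumes inner-product similarity and already-computed embeddings, and all names and dimensions are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(a, b, negatives):
    """Negative-sampling logistic loss for one positive product pair (a, b):
    -log sigma(sim(a, b)) - sum over sampled m of log sigma(-sim(a, m)),
    with sim taken as the inner product of the embeddings."""
    pos = np.log(sigmoid(a @ b))
    neg = sum(np.log(sigmoid(-(a @ m))) for m in negatives)
    return -(pos + neg)

# Toy usage: one co-purchased pair plus k=2 negatives sampled on the fly.
rng = np.random.default_rng(2)
a, b = rng.standard_normal(32), rng.standard_normal(32)
negatives = [rng.standard_normal(32) for _ in range(2)]
print(ns_loss(a, b, negatives))
```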
[Figure 3: An example of using the content-specific modules to create embedded representations of two products with images, text and CF signal: Product A, "The Art of War" (book), and Product B, "Seven Samurai" (movie), each pass through image and text embedding modules to produce image and text vectors.]

In the following we analyze four types of input signal and embedding solutions for each one of them. For all of the modules, we use the L_NS loss (see eq. 2) as the specialization loss.
Model and proxy task: CNN for Image Classification. For generating the image embeddings we propose reusing a model trained for image classification, as in previous work by (Krizhevsky et al., 2012) and (He & McAuley, 2015). In (He & McAuley, 2015), the authors have shown how to use the Inception architecture (Szegedy et al., 2015) and specialize it for the product recommendation task. However, the Inception architecture is very deep and requires extensive training time. For ease of experimentation we use AlexNet, which is a simpler architecture that was also a winner on the ImageNet task previously to Inception (Krizhevsky et al., 2012). In Section 5.2, we will show that, even if simpler, when combined with additional product text information, the AlexNet-based solution can perform very well on the recommendation task.
For our experiments, we use the pretrained version of AlexNet available on Toronto's university website. We experimented with two different ways to specialize the representation in order to compute product similarities. In the first one, we learn a weighted inner product between the two representations (fc7 layer of ImageNet). In the second one, we specialize the fc7 layer to detect product similarities. The second approach led to much better performance and is the one for which we report results."}, {"section_index": "5", "section_name": "4.1.2 EMBEDDING PRODUCT TEXT: WORD2VEC AND CNN ON SENTENCES", "section_text": "Model and proxy task: Word2Vec for Product Language Modeling. For generating word embeddings, we propose reusing Word2Vec (Mikolov et al., 2013b), a model for generating language models that has been employed in a variety of text understanding tasks. More recently, it has been shown in (Pennington et al., 2014) that Word2Vec is closely linked with matrix factorization techniques applied on the word co-occurrence matrix. For Content2Vec, we chose to pretrain Word2Vec on the entire product catalog text information and not use an available set of word embeddings such as the one created on the Google Corpus. The main reason is that the text distribution within product descriptions is quite different from the general distribution. For example, the word 'jersey' has a very different conditional distribution within the product description corpus versus general online text.
Kim (2014) offers a simple solution for sentence-level embeddings using convolutions. The convolutions act as a form of n-gram filters, allowing the network to embed sentence-level information and specializing word embeddings to higher-order tasks such as text classification or sentiment analysis. To the best of our knowledge, this is the first attempt to employ them for the task of product recommendation. For our task, we generate sentences based on the product titles and descriptions."}, {"section_index": "6", "section_name": "4.1.3 
EMBEDDING PRODUCT CO-OCCURRENCES: PROD2VEC", "section_text": "Prod2Vec (Grbovic et al., 2015) is an extension of the Word2Vec algorithm to product shopping sequences. As a result, Prod2Vec can be seen as a matrix factorization technique on the product co-occurrence matrix. In Content2Vec, the Prod2Vec-based similarity contains all of the information that can be derived from the sequential aspect of the user behavior, without taking into account the per-product meta-data.
Meta-Prod2Vec (Vasile et al., 2016) improves upon Prod2Vec by using the product meta-data side information to regularize the final product embeddings. In Content2Vec, we can use the similar technique of co-embedding product categorical information with product ids to generate the embedding values for the categorical features."}, {"section_index": "7", "section_name": "4.2 JOINT PRODUCT EMBEDDING: PAIRWISE RESIDUAL UNIT", "section_text": "As stated in Section 1, the function of the product embedding module is two-fold: first, to model all interactions that exist between the modality-specific embeddings with respect to the final optimization objective, and second, to approximate interaction terms between the products that cannot be explained by a linear combination of the modality-specific similarities. With this in mind, we introduce a new type of learning unit, the Pairwise Residual Unit (eq. 4), which, similarly to the original residual unit introduced in He et al. (2015) (eq. 3), allows the layers to learn incremental, i.e. residual, representations (see Figure 4):

y = F(x) + x    (3)

y = sim(F(x_1), F(x_2)) + sim(x_1, x_2)    (4)

In Hardt & Ma (2016) the authors motivate the use of residual units as helping preserve the representations learned in the previous layers. In our case we are interested in preserving the specialized image and text representations and learning an additional representation for their interactions. Though in previous work most of the residual units use at least two ReLU layers in the residual unit, we observe good results using just one. In order to model interactions between modalities, we could also learn a fully connected layer initialized with identity that takes as input the concatenated modality-specific vectors. However, in order to have a smaller number of parameters and increase model comprehensibility, we would like to keep the modality-specific representations separate and to model the final prediction model as an ensemble.
To be able to measure the incremental value of introducing a residual vector, we introduce a baseline architecture that computes the final prediction based on the linear combination of the modality-specific similarities, denoted Content2Vec-linear, with the associated similarity function defined in eq. 5:

sim_c2v(a_i, b_j) = Σ_{m ∈ Modalities} w_m σ(sim_m(a_i, b_j))    (5)

Under this notation, the residual-based architecture, denoted Content2Vec-res, minimizes L_NS with the similarity function defined in eq. 6:

sim_c2v(a_i, b_j) = Σ_{m ∈ Modalities ∪ {Residual}} w_m σ(sim_m(a_i, b_j))    (6)

In order to learn the residual vector, we keep the modality-specific similarities fixed and co-train the final weights of each of the modalities together with the product-specific residual layers. For example, in the case of using only image and text signals, our final predictor can be defined as in eq. 7, where P_txt and P_img are pre-set and w_txt, w_img, w_res and P_res are learned together:

P(pos | a, b) = σ(w_txt P_txt(pos | a_txt, b_txt) + w_img P_img(pos | a_img, b_img) + w_res P_res(pos | a_res, b_res))    (7)
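A minimal sketch of the Pairwise Residual Unit of Equation 4 follows, assuming a single ReLU layer for F (which the text above suggests suffices) and inner-product similarity; the weight matrix W is a hypothetical stand-in for the learned residual layer.

```python
import numpy as np

def pairwise_residual_score(x1, x2, W):
    """Pairwise Residual Unit (eq. 4): the pair score is the similarity of
    the residual representations F(x1), F(x2) added to the similarity of
    the inputs, with F a single ReLU layer and sim the inner product."""
    def F(x):
        return np.maximum(0.0, W @ x)        # one ReLU layer on each product
    return F(x1) @ F(x2) + x1 @ x2

# Toy usage: two stacked product vectors of dimension 16, residual size 8.
rng = np.random.default_rng(3)
x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
W = rng.standard_normal((8, 16))
print(pairwise_residual_score(x1, x2, W))
```

Keeping F shared across the two products mirrors the design choice above of modeling the final predictor as an ensemble over separate modality-specific representations.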
"}, {"section_index": "8", "section_name": "4.3 PAIR EMBEDDING MODULE", "section_text": "In a retrieval-based architecture, the pair embedding module cannot support more than a simple linear combination of the product embedding vectors, such that the final score can be computed via inner-product. However, we are still interested to know the trade-off in performance between an inner-product-based candidate scoring and a model that allows for explicit interaction terms between the items. To this end, we introduce two explicit interaction models: Content2Vec-crossfeat, a model where we discretize the text and image-specific similarity scores and create explicit feature conjunctions between them, and Content2Vec-embedpairs, a model where we use a similar technique with the Pairwise Residual Unit, in this case modeling the residual of the linear similarity directly as a vector in the pair embedding layer, as shown in Figure 5. In Section 5.2 we show that the two models have, as expected, better performance than the linear model and that the pair embedding is slightly better.

[Figure 4: Pairwise Residual Unit. Left: the original Residual Unit computes F(x) + x; right: the Pairwise Residual Unit computes sim(x1, x2) + sim(F(x1), F(x2)).]

[Figure 5: The two types of Pairwise Residual Units. By comparison with the first version that outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer.]

"}, {"section_index": "9", "section_name": "5.1 DATASET", "section_text": "We perform our evaluation on the publicly available Amazon dataset (McAuley et al., 2015) that represents a collection of products that were co-bought on the Amazon website. Each item has a rich description containing product image, text and category (any of the modalities can be missing). In terms of dimensionality, the dataset contains around 10M pairs of products. We concentrate on the subgraph of Book and Movie product pairs, because both categories are large and they have a reasonable sized intersection. This allows us to look at recommendation performance on cross-category pairs (to evaluate a model trained only on Book pairs on predicting Movie co-bought items) and mixed category pairs (to evaluate the models on Book-Movie product pairs).
Based on the full Book & Movies data we generate three datasets with different characteristics:
The first dataset simulates a hard cold start regime, where all product pairs used in validation and testing are over products unseen in training. This tests the hardest recommendation setup, where all testing data is new. We decided to bench all of our hyperparameters on this regime and use the best setup on all datasets, since tuning on the harder dataset ensures the best generalization error (results shown in Table 1).
The second dataset simulates a non-cold start regime, where the vast majority of the products in the test set are available at training time. The dataset is generated by taking the top 100k most connected products in the original dataset and keeping the links between them (results shown in Table 2).
The third dataset simulates a soft cold start regime, where some of the products in the test set are available at training time.
The dataset is generated by taking the top 200k most connected products in the original dataset and sampling 10% of the links between them (results shown in Table 3).
Hyper-parameters. We fixed the sizes of the embedding vectors for the image CNN module to 4096 hidden units, for the text CNN module to 256, for the Prod2Vec module to 50, and for the residual representation to 128. For optimization we use the Adam algorithm and we manually set the initial learning rate based on the validation set performance. The batch sizes vary for different datasets. We train all the models until validation set performance stops increasing.
Evaluation metrics. For the link prediction task, we use the Area Under Curve (AUC) of the Precision/Recall curve as our evaluation metric."}, {"section_index": "10", "section_name": "5.2 RESULTS", "section_text": "Evaluation task. We evaluate the recommendation methods on the product link prediction task, similar to McAuley et al. (2015). We consider the observed product pairs as positive examples and all unknown pairs as negatives. We generate negative pairs according to the popularity of the products in the positive pairs (negative examples between popular products are more likely to be generated) with a positive to negative ratio of 1:2 (a sketch of this sampling scheme follows the model list below).
We implement and compare the following models:
- ImageCNN: prediction based on specialized image embeddings similarity
- TextCNN: prediction based on specialized text embeddings similarity
- Content2Vec-linear: prediction based on the linear combination of text and image similarities
- Content2Vec-crossfeat: prediction based on the linear combination of discretized image and text similarities and their conjunctions
- Content2Vec-res: prediction based on the linear combination of text and image similarities plus product-level residual vectors similarities
- Content2Vec-embedpairs: prediction based on the linear combination of text and image similarities and a pair-level residual component
- Prod2Vec: prediction based on the product vectors coming from the decomposition of the co-purchase matrix
- Content2Vec+: prediction based on the ensemble of Prod2Vec and Content2Vec models
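The popularity-proportional negative sampling described in the evaluation task above can be sketched as follows. This simplified version is an assumption-laden illustration: it does not, for instance, filter out sampled pairs that happen to be observed positives.

```python
import numpy as np

def sample_negative_pairs(pos_pairs, num_products, ratio=2, seed=0):
    """Draw `ratio` negative pairs per positive pair, picking both endpoints
    proportionally to product popularity in the positive pairs (so popular
    products are more likely to appear in negatives), as in the 1:2 setup."""
    rng = np.random.default_rng(seed)
    counts = np.bincount(np.asarray(pos_pairs).ravel(), minlength=num_products)
    p = counts / counts.sum()
    n = ratio * len(pos_pairs)
    left = rng.choice(num_products, size=n, p=p)
    right = rng.choice(num_products, size=n, p=p)
    return list(zip(left, right))

# Toy usage: 4 observed co-purchases among 6 products.
pos = [(0, 1), (0, 2), (1, 2), (3, 4)]
print(sample_negative_pairs(pos, num_products=6))
```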
The results on hard and soft cold start datasets (Tables 1 and 3) show that our main proposed method, Content2Vec-res, can leverage the additional signal provided by each of the input modalities in a joint manner and leads to significant gains in AUC versus the one-signal baselines (ImageCNN, TextCNN) and their linear combination (Content2Vec-linear).
From the point of view of robustness, Content2Vec-res learns product representations that perform better than the baseline methods on out-of-sample recommendations such as cross-category pairs and mixed-category pairs (Table 1).
We observe that adding an additional layer that represents pair-level interactions does not lead to big improvements in either of the two models we investigated (Content2Vec-crossfeat, Content2Vec-embedpairs), confirming that a product retrieval-based recommender system can achieve state-of-the-art results.
Finally, Content2Vec-res+, our proposed hybrid architecture that combines content and CF signal, achieves better performance than the content- and CF-only models, with bigger lifts in the case of the third dataset (Table 3), where the CF signal is weaker due to higher sparsity.

Table 1: AUC results of image and text-based embeddings on hard cold-start dataset on Book, Movie and Mixed category test product pairs.

Recommendation Model           | Books | Movies | Mixed
Models trained on Books dataset
Book ImageCNN specialized      | 81%   | 78%    | 64%
Book TextCNN                   | 72%   | 79%    | 76%
Book Content2Vec-linear        | 83%   | 83%    | 76%
Book Content2Vec-crossfeat     | 86%   | 83%    | 83%
Book Content2Vec-res           | 89%   | 83%    | 71%
Book Content2Vec-embedpairs    | 90%   | 82%    | 71%
Models trained on Movies dataset
Movie ImageCNN specialized     | 59%   | 92%    | 60%
Movie TextCNN                  | 63%   | 90%    | 65%
Movie Content2Vec-linear       | 64%   | 94%    | 65%
Movie Content2Vec-crossfeat    | 62%   | 94%    | 63%
Movie Content2Vec-res          | 60%   | 95%    | 66%
Movie Content2Vec-embedpairs   | 64%   | 94%    | 65%

Table 2: AUC results on non cold-start dataset. [table values not recovered in this copy]

Table 3: AUC results on soft cold-start dataset.

Recommendation Model           | Test
ImageCNN                       | 80%
TextCNN                        | 78%
Content2vec-linear             | 88%
Content2vec-res                | 89%
Content2vec-embed_pairs        | 90%
Prod2vec                       | 86%
Content2vec-linear+            | 89%
Content2vec-res+               | 92%
Content2vec-embed_pairs+       | 92%

"}, {"section_index": "11", "section_name": "6 CONCLUSIONS", "section_text": "This work has several key contributions. We show how to use all product signal for the task of product recommendation using a modular architecture that can leverage fast evolving solutions for each type of input modality. We define a set of requirements for evaluating the resulting product embeddings and show that our method leads to significant improvements over the single signal approaches on hard recommendation situations such as cold-start and cross-category evaluation. Finally, in order to model the joint aspects of the product embeddings, we introduce a new type of learning unit, named the Pairwise Residual Unit, and show the resulting gains on a real product co-purchases dataset.
In the current work we have addressed all but one of the desired requirements, namely generating retrieval-optimized embeddings. For the next steps, we want to pursue sparse and compressed product representations, in order to help the performance of the final product retrieval system.
Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, and Raghu Ramakrishnan. Content recommendation on web portals. Communications of the ACM, 56(6):92-101, 2013.
Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. E-commerce in your inbox: Product recommendations at scale. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1809-1818, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3664-2. doi: 10.1145/2783258.2788627. URL http://doi.acm.org/10.1145/2783258.2788627.
Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Ruining He and Julian McAuley. VBPR: visual bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1510.01784, 2015.
Chris Johnson. Algorithmic music recommendations at Spotify, 2015.
Noam Koenigstein, Nir Nice, Ulrich Paquet, and Nir Schleyen. The Xbox recommender system. In Proceedings of the sixth ACM conference on Recommender systems, pp. 281-284. ACM, 2012.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191-198. ACM, 2016.
Moritz Hardt and Tengyu Ma. Identity matters in deep learning.
arXiv preprint arXiv:1611.04231, 2016.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pp. 2333-2338. ACM, 2013.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111-3119, 2013b.
Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The adaptive web, pp. 325-341. Springer, 2007.
Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. 2016.
Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643-2651, 2013.
Flavian Vasile, Elena Smirnova, and Alexis Conneau. Meta-Prod2Vec - product embeddings using side-information for recommendation. arXiv preprint arXiv:1607.07326, 2016.
Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. 2016.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.
Lilei Zheng, Khalid Idrissi, Christophe Garcia, Stefan Duffner, and Atilla Baskurt. Logistic similarity metric learning for face verification. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1951-1955. IEEE, 2015.
Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web, pp. 373-374. ACM, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. Learning visual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4642-4650, 2015."}]
SyZprb5xg
[{"section_index": "0", "section_name": "ON ROBUST CONCEPTS AND SMALL NEURAL NETS", "section_text": "Amit Deshpande\nMicrosoft Research, Vigyan, 9 Lavelle Road, Bengaluru 560001, Indi\namitdesh@microsoft.com\nDepartment of Computer Science, The University of Texas at Austin,\n2317 Speedway, Stop D9500 Austin, TX 78712, USA"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The universal approximation theorem of ) and{Cybenko| ) provides a foun\n\ndation to the mathematical theory of artificial neural networks. It states that any continuous functior\non a compact subset of the Euclidean space can be approximated arbitrarily well by a feed-forwar\nartificial neural network with only one hidden layer containing finitely many neurons, under mil\nassumptions on the activation function. In such neural networks, each node applies an activatior\nfunction to a weighted linear combination of its inputs, and the above theorem holds true for man\ndifferent choices of activation functions as shown by However, the universal ap\nproximation theorem and its quantitative improvements by ) and others have certai\nlimitations, namely, they do not provide reasonable, practical bounds or efficient learning algo\nrithms for the parameters of these neural networks, that is, the number of neurons in the hidde:\nlayer and the size of weights used in the linear combinations. For a detailed survey of these result\nin approximation theory, we point the reader to/Pinkus](1999).\nIn practice, we notice that even moderate-sized neural networks can be trained to learn variou\nnatural concepts in computer vision tasks, and the typical rules of thumb followed for their mode\nand size selection are usually guided by the domain knowledge, the learning algorithm, and th\navailable computational resources more than any theoretical bounds; see [Simard et al.|(2003). Th\nknown theoretical bounds are either based on the Network Information Criterion (NIC) by |Amai\n(1998), which is a generalization of Akaike Information Criterion (AIC) by (1974) used i\nal inference, or based on the Vapnik-Chervonenkis dimension; see\n(1993), (1995), (1997). These bounds do not adequatel\n\nexplain the observed efficiency of learning many natural concepts in practice.\n*This work was done during an internship at Microsoft Research India, when the author was a student at\nChennai Mathematical Institute, H1, SIPCOT IT Park, Siruseri, Chennai 603103, India"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The universal approximation theorem for neural networks says that any reason-\nable function is well-approximated by a two-layer neural network with sigmoid\ngates but it does not provide good bounds on the number of hidden-layer nodes or\nthe weights. However, robust concepts often have small neural networks in prac-\ntice. We show an efficient analog of the universal approximation theorem on the\nboolean hypercube in this context.\nee eee en een nn nee ee ee eee I I I II IEE III II III IEE OJ OI II EI\n\nthe weights. However, robust concepts often have small neural networks in prac-\ntice. 
We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.
Most natural concepts are often based on a small number of relevant attributes or features, and can be learnt efficiently once we implicitly map our input to the correct attribute space and focus on these relevant attributes or features. Moreover, most natural concepts are also robust, that is, their positive and negative examples are reasonably unambiguous and far from each other. Thus, an important theoretical question is to understand the underlying cognitive process, find a reasonably close and accurate model for it, and answer why certain models like artificial neural networks can mimic this cognitive process in practice.
The implicit mapping of our input coordinates to the space of attributes is formalized by the kernel method in machine learning; see Hofmann et al. (2008). Attribute-efficient learning proposed by Valiant (2000) and Littlestone (1988) captures the ease of learning via improved VC-dimension bounds that depend only on a small number of relevant attributes. Robust concepts are often defined using large-margin classifiers studied in the context of Support Vector Machines; see Cortes & Vapnik (1995). We use a different notion of robustness suited to the boolean hypercube known as noise-stability. Due to known results from Fourier analysis over the boolean hypercube, noise-stability also implies closeness to a function that depends only on a small number of attributes.
Since the universal approximation theorem gives a depth-2 neural network with only one hidden layer, the effect of depth on the power of neural networks has attracted considerable interest in approximation theory as well as boolean circuit complexity; see de Villiers & Barnard (1993) and Siu et al. (1995). Note that on the boolean hypercube, depth-d circuits with sigmoid gates and linear threshold gates are essentially equivalent. An important result relevant to our paper is due to a long line of work including Goldmann et al. (1992), Goldmann & Karpinski (1998), and Hofmeister, which proved that any depth-d linear threshold circuit with polynomially (in the number n of input variables) many nodes but arbitrary weights can be efficiently simulated by a depth-(d+1) linear threshold circuit with polynomially many nodes and polynomially bounded integer weights."}, {"section_index": "3", "section_name": "2 OUR RESULTS", "section_text": "We work with linear threshold circuits with boolean inputs and outputs, which are discrete analogs of the neural networks with real-valued inputs and continuous activation functions. They are also known as multi-layer perceptrons as in Minsky & Papert (1969), which are simply feed-forward neural networks where each node computes a weighted linear combination of its inputs and applies a threshold function for activation.
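For concreteness, a depth-2 linear threshold circuit of the kind studied here can be evaluated in a few lines; the gates and weights below are arbitrary toy values, not constructions from the paper.

```python
import numpy as np

def ltf(x, w, w0):
    """Linear threshold function sgn(<w, x> - w0) with outputs in {-1, +1}."""
    return 1 if np.dot(w, x) - w0 >= 0 else -1

def depth2_ltc(x, hidden, top_w, top_w0):
    """Depth-2 linear threshold circuit: a layer of LTF gates feeding one
    output LTF gate (a two-layer perceptron with sign activations)."""
    h = np.array([ltf(x, w, w0) for (w, w0) in hidden])
    return ltf(h, top_w, top_w0)

# Toy usage on {-1, +1}^5: three integer-weight hidden gates, majority on top.
hidden = [(np.array([1, 1, 1, 0, 0]), 0),
          (np.array([0, 1, -1, 1, 0]), 1),
          (np.array([1, 0, 0, -1, 1]), 0)]
top_w, top_w0 = np.ones(3), 0
x = np.array([1, -1, 1, 1, -1])
print(depth2_ltc(x, hidden, top_w, top_w0))
```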
As mentioned above, the notion of robustness we use is noise-stability or low noise-sensitivity. The noise sensitivity of a boolean function is simply the fraction of inputs whose output changes, if we change each coordinate of the input independently with a small probability, say some ε > 0.
As a warm-up, we show that if a boolean function defined on the boolean hypercube {−1, 1}^n is noise-stable, that is, if it has low noise-sensitivity, then it can be approximated by a depth-2 linear threshold circuit (that is, with one hidden layer) that depends only on constantly many variables in the input, and its number of hidden nodes and the weights are also constants, all independent of n. Here we quantify approximation or closeness based on the fraction of inputs where two functions differ. This result may be folklore although we are not aware of any reference.
Theorem 1. Any f : {−1, 1}^n → {−1, 1} that has small noise-sensitivity for ε-perturbations, that is, NS_ε(f) = O(δ√ε), is δ-close to a depth-2 linear threshold circuit that depends only on O(1) variables of the input with O(1) hidden nodes and O(1) weights, where the constants O(1) depend on ε and δ but are independent of n.
When the given function is actually a linear threshold function, that is, when it represents a halfspace, we can improve the above theorem with constants O(1) that are polynomial in 1/ε and 1/δ, and thus give an efficient analog of the universal approximation theorem for neural networks over the boolean hypercube. Note that this is consistent with the intuition that more noise-stable concepts can be approximated by smaller neural networks. It also shows that a given concept may be linearly separable in a high n-dimensional kernel space but its approximation by neural networks only depends on an inherent parameter like robustness or noise-sensitivity, independent of n.
Theorem 2. Any linear threshold function f : {−1, 1}^n → {−1, 1} that has small noise-sensitivity for ε-perturbations, that is, NS_ε(f) = O(δ²√ε), is δ-close to a depth-2 linear threshold circuit whose number of hidden nodes and integer weights are bounded by polynomials in 1/ε and 1/δ, independent of n.
Equipped with this, we show the following implication for learning. Given oracle access to such a linear threshold function f of low noise-sensitivity, we can learn a depth-2 linear threshold circuit that approximates f well, in polynomial time.
We would also like to note that it is possible to extend our result for halfspaces to polynomial threshold functions. This uses the facts that any degree-d polynomial threshold function that is ε-close to a junta is close to a junta that is a polynomial threshold function of degree at most d, and that the machinery from De et al. (2014) extends to small weight polynomial threshold functions as well.
We now discuss some obstacles to possible improvements of our results.
The n^{O(k)} running time is needed to identify the specific set of k = O(1/ε² · log(1/ε) · log(1/δ)) relevant coordinates. This n^{O(k)} factor is unavoidable while learning k-juntas, and a candidate hard case is presented in Blum et al. (1994). Only recently Valiant (2012) gave an improved algorithm to learn k-juntas with noise rate η that runs in time less than n^{0.8k} · poly(2^k, 1/(1 − 2η)).
Daniely's result rules out efficient, constant-factor approximation for even improper learning of halfspaces using any hypothesis class on the boolean hypercube under non-uniform distributions.¹ However, Daniely (2014) gets around this by giving a PTAS for improper learning of halfspaces on the unit sphere under the uniform distribution. Our result can be seen as another way to circumvent the hardness results. We learn noise-stable halfspaces on the boolean hypercube under the uniform distribution, by giving an efficient, agnostic-type learning algorithm whose output hypothesis is a depth-2 neural network. This is arguably more natural than other improper learning results for halfspaces via low-degree polynomials.

¹Results in Daniely et al. (2013) are under certain assumptions that were refuted in Allen et al. (2015). However, Daniely (2015) recovered a slightly weaker but very similar result for halfspaces under different assumptions.

Not having an efficient version of Bourgain's theorem for arbitrary noise-stable boolean functions, in which the number of junta variables is polynomial in the noise-sensitivity parameters, is another obstacle to efficient generalizations of our result. Note that the proof of this for noise-stable halfspaces does not generalize to higher-depth linear threshold circuits. Another approach is to approximate any noise-stable function first by a halfspace and then by a depth-2 linear threshold circuit, but this has been ruled out by Mossel & Neeman with an example of a noise-stable function that is far from any halfspace.

In a recent paper, Feldman & Vondrak (2013) have shown that submodular functions are ε-close to O(1/ε² · log(1/ε))-juntas. Note that this tells us that we can ε-approximate submodular functions by polynomials of degree O(1/ε² · log(1/ε)). This means we can approximate submodular functions by depth-3 neural networks with linear threshold gates everywhere except for the top gate.

We now give a brief outline of the proofs of the above theorems. Bourgain (2002) proved that any function with small noise-sensitivity can be approximated by another function that is a junta, meaning that it depends on very few coordinates. In Theorem 1 we show that such a function can also be represented by a small depth-2 linear threshold circuit with small size and small integer weights. Moreover, any linear threshold function that is close to a junta is actually close to a linear threshold function defined over those junta coordinates. Thus, we can approximate the given noise-stable function by a linear threshold function on a small number of inputs; however, its weights may be large. Therefore, we use the size-depth-weight trade-off from Goldmann et al. (1992) to simulate this linear threshold function by a depth-2 linear threshold circuit with small size as well as small weights in Theorem 2. We also use a recent improvement over Bourgain's theorem by Diakonikolas et al. (2014) to get bounds polynomial in the noise-stability parameters. Theorem 3 follows by combining a result of De et al. (2014) on agnostic-type learning by a linear threshold function with a constructive, efficient simulation of the Goldmann et al. (1992) result by Goldmann & Karpinski (1998)."}, {"section_index": "3", "section_name": "3 
RELATED WORK", "section_text": "Motivated by the recent advances in neural networks, there have been various attempts to build a theory to understand why neural networks can efficiently simulate many natural concepts and why their models and parameters can be learnt efficiently, for example, Andoni et al. (2014) and Arora et al. (2014). Our objective is to show efficient analogs of the universal approximation theorem for neural networks, a question that has been studied in approximation theory as well as boolean circuit complexity. We combine the size-depth-weight trade-off results from about two decades ago, such as Goldmann et al. (1992) and Goldmann & Karpinski (1998), with more recent work on the Fourier analysis of boolean functions and its corollaries in learning. Also note that there are known NP-hardness results for learning halfspaces, by Guruswami & Raghavendra (2009), and for approximately learning depth-2 threshold circuits, by Bartlett & Ben-David (2002). However, these are for arbitrary threshold circuits. As we will show, the noise-stability constraint allows us to get a polynomial time algorithm to learn a depth-2 threshold circuit approximating the original function.

Arriaga & Vempala (2006) showed that robust or large-margin halfspaces in R^n can be learnt efficiently using random projections. Their learning algorithm outputs a depth-2 neural network with different activation functions in different layers. We define robustness using noise-stability instead, and show that better noise-stability reduces learning complexity. Our results also generalize to polynomial threshold functions, that is, a noise-stable polynomial threshold function (PTF) can be represented by a small, depth-2 neural network.

The low effective dimension of hyperparameters has been observed and exploited to learn using neural networks in practice by Bergstra & Bengio (2012). We propose noise-stability as an approach to study this theoretically."}, {"section_index": "4", "section_name": "4 PRELIMINARIES", "section_text": "Here we give a compilation of definitions and known results that we will use to prove Theorems 1, 2, and 3. Noise-stable boolean functions have low noise-sensitivity. The noise-sensitivity of a boolean function, with respect to ε-perturbations, is defined as the fraction of inputs whose output changes when we change each bit of the input independently with a small probability ε.

Definition 1. The noise-sensitivity of a boolean function f : {−1,1}^n → {−1,1} at a given noise rate ε > 0 is defined as

NS_ε(f) = Prob_{x,y}(f(x) ≠ f(y)),

where x is uniformly distributed in {−1,1}^n, and y is obtained from x by flipping each bit of x independently with probability ε.

A theorem of Bourgain (2002) states that boolean functions with small noise-sensitivity are close to juntas, which are boolean functions that depend on very few coordinates. Note that the number of these relevant coordinates is independent of n.

Proposition 1. Any f : {−1,1}^n → {−1,1} that satisfies NS_ε(f) = O(δ√ε) is δ-close to a k-junta, where k = (1/δ)^{O(1/ε)}.

Here, δ-closeness means agreement on a 1 − δ fraction of the inputs. A linear threshold function (halfspace) f : {−1,1}^n → {−1,1} is a function of the form

f(x) = sgn(Σ_{i=1}^n w_i x_i − w_0).

Note that the √ε in the bound of Proposition 1 has a special significance for linear threshold functions, as we explain below. A theorem of Peres (2004) states that the noise-sensitivity of any linear threshold function at noise rate ε is at most 2√ε.

Lemma 2. Any linear threshold function f : {−1,1}^n → {−1,1} satisfies NS_ε(f) ≤ 2√ε.
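Definition 1 and Lemma 2 are easy to probe numerically. Below is a minimal Monte Carlo sketch (our own illustration, not from the paper) that estimates NS_ε(f) by sampling and checks that a majority halfspace stays under the 2√ε bound of Lemma 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_sensitivity(f, n, eps, samples=20000):
    """Monte Carlo estimate of NS_eps(f): draw x uniformly from {-1,1}^n,
    flip each bit independently with probability eps, count disagreements."""
    x = rng.choice([-1, 1], size=(samples, n))
    flips = rng.random((samples, n)) < eps
    y = np.where(flips, -x, x)
    fx = np.apply_along_axis(f, 1, x)
    fy = np.apply_along_axis(f, 1, y)
    return np.mean(fx != fy)

majority = lambda x: 1 if x.sum() >= 0 else -1  # a noise-stable halfspace

for eps in (0.01, 0.05, 0.1):
    ns = noise_sensitivity(majority, n=101, eps=eps)
    print(f"eps={eps}: NS={ns:.3f}  (bound 2*sqrt(eps)={2*np.sqrt(eps):.3f})")
```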
The bounds in Proposition 1 can be improved when f is a linear threshold function, as shown by the following result of Diakonikolas et al. (2014). Thus, a noise-stable linear threshold function is close to a k-junta, where k depends polynomially on the noise and approximation parameters but is independent of n.

Lemma 3 (Diakonikolas et al., 2014). Any linear threshold function f : {−1,1}^n → {−1,1} with NS_ε(f) = O(δ²√ε) is δ-close to a junta over O(1/ε² · log(1/ε) · log(1/δ)) coordinates.

Remark: For convenience, we use NS_ε(f) = O(δ²√ε) as our assumption whenever using the above lemma.

The following lemma from O'Donnell & Servedio (2011) ties this up nicely: if a linear threshold function is close to a junta, then it must be close to a linear threshold function defined over those junta coordinates.

Lemma 4. If a linear threshold function f : {−1,1}^n → {−1,1} is δ-close to a junta over a subset J ⊆ [n] of coordinates, then f is δ-close to a linear threshold function defined over that subset J ⊆ [n] of coordinates.

Linear threshold circuits, in which each gate computes a linear threshold function, form an important class in circuit complexity. We borrow the standard definitions and notation from Siu et al. (1995) and Goldmann et al. (1992).

Definition 3. LT_d is defined as the class of linear threshold circuits of depth d on n inputs with the number of nodes polynomial in n but arbitrary weights inside the linear threshold functions. LT̂_d is defined as the class of linear threshold circuits of depth d on n inputs with both the number of nodes and the weights inside the linear threshold functions polynomially bounded in n.

The size-depth-weight trade-offs for linear threshold circuits have been studied in circuit complexity with keen interest, and a long line of work culminated in the following result by Goldmann et al. (1992). Here, the weight bounds are bounds on the ratio of the maximum and the minimum weights, when all of them are integers.

Proposition 5 (Goldmann et al., 1992). LT_d ⊆ LT̂_{d+1}.

This means that any depth-d linear threshold circuit of polynomial size but arbitrary weights can be simulated by a depth-(d+1) linear threshold circuit whose size and weights are both polynomially bounded. While Goldmann et al. (1992) give an existence result, Goldmann & Karpinski (1998) give a constructive proof, and it is easy to check that the underlying simulation is efficient and can be computed in polynomial time as well. Hofmeister (1996) has a simplified proof of Goldmann & Karpinski (1998) with improved explicit bounds.

Bourgain's theorem has also been extended to the case of boolean functions with inputs that come from constant-biased distributions over {−1,1}^n, in Kindler & Safra (2002). Our general result can be extended to these cases as well. For this we need to define the λ-noise-sensitivity of a boolean function with respect to μ_p, where μ_p is the distribution that picks −1 with probability p and 1 with probability 1 − p.

Definition 4. The λ-noise-sensitivity of a boolean function f : {−1,1}^n → {−1,1} with respect to μ_p is defined as

NS_{λ,p}(f) = Prob_{x,y}(f(x) ≠ f(y)),

where x ~ μ_p^n, and y is constructed by first sampling a set I of coordinates from [n], including each coordinate independently with probability λ, and then replacing those coordinates of x by coordinates independently sampled from μ_p.

Lemma 6 (Kindler & Safra, 2002). For any parameter λ > 0, fix k = log_{1−λ}(1/2). Then every boolean function f : {−1,1}^n → {−1,1} whose λ-noise-sensitivity with respect to μ_p is bounded by (ε/k)² is ε-close to a junta whose size depends only on ε, p, and k, independent of n.
Lemma 7. Any f : {−1,1}^n → {−1,1} that is a k-junta can be represented by a depth-2 linear threshold circuit with the number of nodes and the weights bounded by 2^{O(k)}.

Proof. Since f is a k-junta, we can pretend that f : {−1,1}^k → {−1,1}. Each positive example x ∈ {−1,1}^k with f(x) = 1 can be isolated by a single halfspace h_x(y) = sgn(⟨x, y⟩ − (k − 1/2)), which outputs a positive value for y ∈ {−1,1}^k iff x = y. We can build a depth-2 linear threshold circuit whose hidden nodes are the gates h_x, one for each positive example x of f. Thus, on a positive example of f, exactly one hidden-layer node outputs 1; otherwise, all hidden-layer nodes output −1. Now we can place a linear threshold gate at the top with all weights 1 and threshold 1 − p, where p is the number of positive examples of f. Note that all the hidden threshold gates have integer weights bounded by k and that they are at most 2^k in number. The top gate has integer weights bounded by 2^k. Thus, f can be represented by an LT_2 circuit, that is, a depth-2 linear threshold circuit, in which the size of the circuit and the integer weights used in it are bounded by 2^{O(k)}. □

Therefore, combining this with Proposition 1, we get that any noise-stable f as required in Theorem 1 is δ-close to a depth-2 linear threshold circuit whose size and integer weights are bounded by 2^{O(k)}, where k = (1/δ)^{O(1/ε)}, independent of n.

Since Bourgain's theorem can be improved for linear threshold functions, with polynomial dependency on the noise and approximation parameters, we can approximate the given function by a junta in which the number of junta variables is polynomially bounded. By Lemma 4 we can, moreover, say that our function is not just close to a junta but close to a linear threshold function defined over these junta variables. The only caveat is that the weights used in this linear threshold function may be large. This is where we invoke a size-depth-weight trade-off result such as Proposition 5 from circuit complexity to simulate this linear threshold function by a linear threshold circuit with one extra layer of depth but polynomially bounded weights.

Proof. (Proof of Theorem 2) From Lemma 3 we see that any linear threshold function f with low noise-sensitivity NS_ε(f) = O(δ²√ε) is δ-close to an O(1/ε² · log(1/ε) · log(1/δ))-junta. From Lemma 4, moreover, it must be δ-close to a linear threshold function over these junta variables. Thus, f is δ-close to an LT_1 function over these junta variables, but the weights could be large. However, Proposition 5 shows that this can be simulated by an LT̂_2 function over these junta variables, with weights polynomially bounded in the number of junta variables. Therefore, f is δ-close to an LT̂_2 function over O(1/ε² · log(1/ε) · log(1/δ)) variables, with the size of the circuit and the weights at the threshold gates polynomially bounded in 1/ε and 1/δ, but independent of n. This concludes the proof of Theorem 2. □
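The construction in the proof of Lemma 7 is entirely explicit. The sketch below (our own illustration) builds the depth-2 circuit from oracle access to a k-junta and checks it against the original function on all inputs.

```python
from itertools import product
import numpy as np

def junta_to_depth2(f, k):
    """Lemma 7 construction: one hidden gate per positive example of f,
    and a top gate with unit weights and threshold 1 - p."""
    positives = [np.array(x) for x in product([-1, 1], repeat=k)
                 if f(np.array(x)) == 1]
    p = len(positives)

    def circuit(y):
        # The hidden gate for example x fires (outputs 1) iff y == x.
        h = np.array([1 if np.dot(x, y) >= k - 0.5 else -1 for x in positives])
        return 1 if h.sum() >= 1 - p else -1

    return circuit

# Sanity check on a 4-variable junta (sign of the product of the first two bits).
f = lambda x: int(x[0] * x[1])
circuit = junta_to_depth2(f, 4)
assert all(circuit(np.array(x)) == f(np.array(x))
           for x in product([-1, 1], repeat=4))
print("depth-2 circuit agrees with the junta on all inputs")
```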
Proof. (Proof of Theorem 3) In light of Theorem 2, the broad outline of the algorithm is as follows. As seen in the proof of Theorem 2, the given linear threshold function of low noise-sensitivity is close to another linear threshold function that depends only on a small, constant number of input variables. We can go over each small subset of variables by brute force. Over each small subset, we try to learn a linear threshold function defined on it that is closest to the given function. Here we use a result from De et al. (2014) (see Theorem 36 of De et al. (2014)) on agnostic-type learning of halfspaces via reconstructing the Chow parameters of a linear threshold function; the Chow parameters are the level-0 and level-1 Fourier coefficients, which are known to completely determine a linear threshold function.

Lemma 8. Let f : {−1,1}^n → {−1,1} and let opt be the minimum disagreement (as a fraction of the inputs) of f with its closest linear threshold function. Then given any 0 < ε, δ < 1/2 and access to independent uniform samples (x, f(x)), we can output a linear threshold function g (given by its weights) such that, with probability 1 − δ,

d(f, g) ≤ 2^{−Ω(√(log(1/opt)))} + ε,

where the algorithm runs in time poly(n) · (1/ε)^{O(log²(1/ε))} · log(1/δ).

An immediate corollary that is useful to us is:

Corollary 1. Let f : {−1,1}^n → {−1,1} be a boolean function that is δ-close to a linear threshold function over a given subset S ⊆ [n] of input variables. Then, for 0 < δ < 1/2, and given access to independent uniform examples (x, f(x)), we can output a linear threshold function g (given by its weights) such that, with probability 1 − δ,

d(f, g) ≤ 2^{−Ω(√(log(1/δ)))} + δ.

Thus, we go over all subsets of size O(1/ε² · log(1/ε) · log(1/δ)), run the agnostic-type learning of linear threshold functions by De et al. (2014) on each, and take the best of these. We then take the corresponding output, which is a linear threshold function with weights possibly exponential in 1/ε and 1/δ, and apply Goldmann & Karpinski (1998) to convert it into a depth-2 linear threshold circuit whose size and weights are both polynomially bounded in 1/ε and 1/δ. □
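The overall learning loop of Theorem 3 is simple to state in code. The sketch below (our own illustration) shows the brute-force outer loop; `fit_ltf_agnostic` stands in for the Chow-parameter algorithm of De et al. (2014), which we replace here with a plain perceptron fit purely for illustration, and the final weight-reduction step of Goldmann & Karpinski (1998) is left as a stub.

```python
from itertools import combinations
import numpy as np

def fit_ltf_agnostic(X, y):
    """Stand-in for De et al. (2014): fit a halfspace to labeled samples.
    Here: a few perceptron passes, purely for illustration."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(50):
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:
                w += yi * xi
    return w

def learn_depth2(X, y, k):
    """Try every k-subset of coordinates, fit an LTF on each, keep the best."""
    n = X.shape[1]
    best_w, best_S, best_err = None, None, 1.0
    for S in combinations(range(n), k):          # the n^{O(k)} outer loop
        cols = list(S)
        w = fit_ltf_agnostic(X[:, cols], y)
        pred = np.sign(np.hstack([X[:, cols], np.ones((len(X), 1))]) @ w)
        err = np.mean(pred != y)
        if err < best_err:
            best_w, best_S, best_err = w, S, err
    # Final step (omitted): reduce the weights of best_w via the constructive
    # Goldmann & Karpinski (1998) simulation, yielding a depth-2 circuit.
    return best_S, best_w, best_err
```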
"}, {"section_index": "5", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "We show an efficient analog of the universal approximation theorem for neural networks in the case of noise-stable halfspaces over the boolean hypercube, and give efficient learning algorithms for the same. We do this via an interplay of techniques from Fourier analysis over the boolean hypercube and size-depth-weight trade-off results on linear threshold circuits from circuit complexity.

One might be able to extend these results to continuous domains where the input is sampled uniformly from [−1,1]^n by using the ANOVA (analysis of variance) decomposition of a function. However, to do this one would have to prove a Bourgain-type theorem for these settings."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.

Sarah R. Allen, Ryan O'Donnell, and David Witmer. How to refute a random CSP. CoRR, abs/1505.04383, 2015.

Shun-ichi Amari. Learning and statistical inference. In The Handbook of Brain Theory and Neural Networks, pp. 522–526. MIT Press, 1998.

Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 1908–1916, 2014.

Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 584–592, 2014.

Rosa I. Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projection. Machine Learning, 63(2):161–182, 2006.

Peter L. Bartlett. Vapnik-Chervonenkis dimension bounds for two- and three-layer networks. Neural Computation, 5(3):371–373, 1993.

Peter L. Bartlett and Shai Ben-David. Hardness results for neural network approximation problems. Theoretical Computer Science, 284(1):53–66, 2002.

Eric B. Baum and David Haussler. What size net gives valid generalization? Neural Computation, 1(1):151–160, 1989.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281–305, 2012.

Avrim Blum, Merrick L. Furst, Michael J. Kearns, and Richard J. Lipton. Cryptographic primitives based on hard learning problems. In Proceedings of CRYPTO '93, pp. 278–291. Springer-Verlag, 1994.

Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.

George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 5(4):455, 1992.

Amit Daniely. A PTAS for agnostically learning halfspaces. CoRR, abs/1410.7050, 2014.

Anindya De, Ilias Diakonikolas, Vitaly Feldman, and Rocco A. Servedio. Nearly optimal solutions for the Chow parameters problem and low-weight approximation of halfspaces. Journal of the ACM, 61(2):11:1–11:36, 2014.

J. de Villiers and E. Barnard. Backpropagation neural nets with one and two hidden layers. IEEE Transactions on Neural Networks, 4(1):136–141, 1993.

I. Diakonikolas, R. Jaiswal, R. A. Servedio, L.-Y. Tan, and A. Wan. Noise stable halfspaces are close to very small juntas. 2014.

Vitaly Feldman and Jan Vondrak. Optimal bounds on approximation of submodular and XOS functions by juntas. In Proceedings of FOCS 2013, pp. 227–236, 2013.

Thomas Hofmann, Bernhard Schölkopf, and Alexander J. Smola. Kernel methods in machine learning. Annals of Statistics, 36(3):1171–1220, 2008.
Thomas Hofmeister. A note on the simulation of exponential threshold weights. In Computing and Combinatorics: Second Annual International Conference, COCOON '96, pp. 136–141. Springer Berlin Heidelberg, 1996.

Marek Karpinski and Angus Macintyre. Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. Journal of Computer and System Sciences, 54(1):169–176, 1997.

Guy Kindler and Shmuel Safra. Noise-resistant boolean functions are juntas. Preprint, 2002.

Marvin Minsky and Seymour Papert. Perceptrons - An Introduction to Computational Geometry. MIT Press, 1987."}]
ryWKREqxx
[{"section_index": "0", "section_name": "EMERGENT PREDICATION STRUCTURE IN VECTOR REPRESENTATIONS OF NEURAL READERS", "section_text": "Hai Wang* Takeshi Onishi* Kevin Gimpel David McAllester
{haiwang,tonishi,kgimpel,mcallester}@ttic.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION AND OVERVIEW", "section_text": "Reading comprehension is a type of question answering task where the answer is to be found in a passage about particular entities and events not otherwise familiar to the reader. In particular, the entities and events should not be mentioned in structured databases of general knowledge. Reading comprehension problems are intended to measure a system's ability to extract semantic information about entities and relations directly from unstructured text. Several large-scale reading comprehension datasets have been introduced recently, in particular the CNN & DailyMail datasets (Hermann et al., 2015), the Children's Book Test (CBT) (Hill et al., 2016), and the Who-did-What dataset (Onishi et al., 2016). The large sizes of these datasets enable the application of deep learning. These are all cloze-style datasets, where a question is constructed by deleting a word or phrase from an article summary (in CNN/DailyMail), from a sentence in a children's story (in CBT), or by deleting a person from the first sentence of a different news article on the same entities and events (in Who-did-What).

In this paper we present empirical evidence for the emergence of predication structure in a certain class of neural readers. To understand predication structure it is helpful to review the anonymization performed in the CNN/DailyMail dataset. In this dataset named entities are replaced by anonymous entity identifiers such as "entity37". The passage might contain "entity52 gave entity24 a rousing applause" and the question might be "X received a rousing applause from entity52". The task is to fill in X from a given multiple-choice list of candidate entity identifiers. A fixed, relatively small set of the same entity identifiers is used over all the problems, and the same problem is presented many times with the entity identifiers shuffled. This prevents a given entity identifier from having any semantically meaningful vector embedding. The embeddings of the entity identifiers are presumably just pointers to semantics-free tokens. We will write entity identifiers as logical constant symbols such as c rather than strings such as "entity37"."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of "predication structure" in the hidden state vectors of a class of neural readers including the Attentive Reader and the Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a "predicate vector" P and a "constant symbol vector" c, and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating "aggregation readers" such as the Attentive Reader and the Stanford Reader to "explicit reference readers" such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. 
In an independent contribution, we show that the addition of linguistic features to the input of existing neural readers significantly improves performance, yielding the best results to date on the Who-did-What dataset."

Aggregation readers, including Memory Networks (Weston et al., 2015; Sukhbaatar et al., 2015), the Attentive Reader (Hermann et al., 2015) and the Stanford Reader (Chen et al., 2016), use bidirectional LSTMs or GRUs to construct a contextual embedding h_t of each position t in the passage and also an embedding q of the question. They then select an answer c using a criterion similar to

argmax_c Σ_t ⟨h_t, q⟩ ⟨h_t, e(c)⟩     (1)

where e(c) is the vector embedding of the constant symbol (entity identifier) c. In practice the inner product ⟨h_t, q⟩ is normalized over t using a softmax to yield an attention α_t over t, and (1) becomes

argmax_c ⟨e(c), Σ_t α_t h_t⟩.     (2)

We argue that for aggregation readers, roughly defined by (2), the hidden state h_t of the passage at position (or word) t can be viewed as a vector concatenation h_t = [e(Φ_t), e'(c_t)], where Φ_t is a property (or statement or predicate) being stated of a particular constant symbol c_t. A logician might write this as h_t = Φ_t[c_t]. Furthermore, the question can be interpreted as having the form Ψ[x], where the problem is to find a constant symbol c such that the passage implies Ψ[c]. Assuming h_t = [e(Φ_t), e'(c_t)], q = [e(Ψ), 0] and e(c) = [0, e'(c)], we can rewrite (1) as

argmax_c Σ_t ⟨e(Φ_t), e(Ψ)⟩ ⟨e'(c_t), e'(c)⟩.     (3)

The first inner product in (3) is interpreted as measuring the extent to which Φ_t[x] implies Ψ[x] for any x. The second inner product is interpreted as restricting t to positions talking about the constant symbol c.

Note that the posited decomposition of h_t is not explicit in (2) but instead must emerge during training. We present empirical evidence that this structure does emerge. The empirical evidence is somewhat tricky, as the direct sum structure that divides h_t into its two parts need not be axis aligned and therefore need not literally correspond to vector concatenation.

We also consider a second class of neural readers that we call explicit reference readers. Explicit reference readers avoid (2) and instead use
argmax_c Σ_{t∈R(c,p)} α_t     (4)

where R(c, p) is the subset of the positions where the constant symbol (entity identifier) c occurs. Note that if we identify α_t with ⟨e(Φ_t), e(Ψ)⟩ and assume that ⟨e'(c), e'(c_t)⟩ is either 0 or 1 depending on whether c = c_t, then (3) and (4) agree. In explicit reference readers the hidden state h_t need not carry a pointer to c_t, as the restriction on t is independent of learned representations. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), the Attention-over-Attention Reader (Cui et al., 2016) and others (a list can be found in Section 6).

So far we have only considered anonymized datasets that require the handling of semantics-free constant symbols. However, even for non-anonymized datasets such as Who-did-What, it is helpful to add features which indicate which positions in the passage are referring to which candidate answers. This indicates, not surprisingly, that reference is important in question answering. The fact that explicit reference features are needed in aggregation readers on non-anonymized data indicates that reference is not being solved by the aggregation readers. However, as reference seems to be important for cloze-style question answering, these problems may ultimately provide training data from which reference resolution can be learned.

Sections 2 and 3 review various existing datasets and models respectively. Section 4 presents the logical structure interpretation of aggregation readers in more detail and the empirical evidence supporting it. Section 5 proposes new models that enforce the direct sum structure of the hidden state vectors. It is shown that these new models perform well on the Who-did-What dataset, provided that reference annotations are added as input features. Section 5 also describes additional linguistic features that can be added to the input embeddings, and shows that these improve the performance of existing readers, resulting in the best single-model performance to date on the Who-did-What dataset.

Before presenting various models for machine comprehension, we give a general formulation of the machine comprehension task. We take an instance of the task to be a four-tuple (q, p, a, A), where q is a question given as a sequence of words containing a special token for a "blank" to be filled in, p is a document consisting of a sequence of words, A is a set of possible answers, and a ∈ A is the ground truth answer. All words are drawn from a vocabulary V. We assume that all possible answers are words from the vocabulary, that is, A ⊆ V, and that the ground truth answer appears in the document, that is, a ∈ p. The problem can be described as that of selecting the answer a ∈ A that answers question q based on information from p.

We will now briefly summarize important features of the related datasets in reading comprehension.

CNN & DailyMail: Hermann et al. (2015) constructed these datasets from a large number of news articles from the CNN and Daily Mail news websites. The main article is used as the context, while the cloze-style question is formed from one short highlight sentence appearing in conjunction with the published article. To avoid the model using external world knowledge when answering the question, the named entities in the entire dataset were replaced by anonymous entity IDs, which were then further shuffled for each example. This forces models to rely on the context document to answer each question. In this anonymized corpus the entity identifiers are taken to be a part of the vocabulary, and the answer set A consists of the entity identifiers occurring in the passage.

Who-did-What (WDW): The Who-did-What dataset (Onishi et al., 2016) contains 127,000 multiple-choice cloze questions constructed from the LDC English Gigaword newswire corpus (Graff & Cieri, 2003). In contrast with CNN and Daily Mail, it avoids using article summaries for question formation. Instead, each problem is formed from two independent articles: one is given as the passage to be read, and a different article on the same entities and events is used to form the question. Further, Who-did-What avoids anonymization, as each choice is a person named entity. 
In this dataset the answer set A consists of the person named entities occurring in the passage. Finally, the problems have been filtered to remove a fraction that are easily solved by simple baselines. It has two training sets. The larger training set ("relaxed") is created using less baseline filtering, while the smaller training set ("strict") uses the same filtering as the validation and test sets.

Children's Book Test (CBT): Hill et al. (2016) developed the CBT dataset in a slightly different fashion to the CNN/DailyMail datasets. They take any sequence of 21 consecutive sentences from a children's book: the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence. The task complexity varies with the type of the omitted word (verb, preposition, named entity, or common noun). According to the original study on this dataset (Hill et al., 2016), n-gram and recurrent neural network language models are sufficient for predicting verbs or prepositions. However, for named entities and common nouns, current solvers are still far from human performance.

Other Related Datasets: It is also worth mentioning several related datasets. The MCTest dataset (Richardson et al., 2013) consists of children's stories and questions written by crowdsourced workers. The dataset only contains 660 documents and is too small to train deep models. The bAbI dataset (Weston et al., 2016) is constructed automatically using synthetic text generation and can be perfectly answered by hand-written algorithms. The SQuAD dataset (Rajpurkar et al., 2016) consists of passage-question pairs where the passage is a Wikipedia article and the questions are written by crowdsourced workers. Although crowdsourcing is involved, the dataset contains over 100,000 problems. But the answer is often a word sequence, which is difficult to handle with the reader models considered here. The LAMBADA dataset (Paperno et al., 2016) is a word prediction dataset which requires a broad discourse context, and the correct answer might not occur in the context. Nonetheless, when the correct answer is in the context, neural readers can be applied effectively."}, {"section_index": "3", "section_name": "AGGREGATION READERS AND EXPLICIT REFERENCE READERS", "section_text": "Here we classify readers into aggregation readers and explicit reference readers. Aggregation readers appeared first in the literature and include Memory Networks (Weston et al., 2015; Sukhbaatar et al., 2015), the Attentive Reader (Hermann et al., 2015), and the Stanford Reader (Chen et al., 2016). Aggregation readers are defined by equations (8) and (10) below. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), and the Attention-over-Attention Reader (Cui et al., 2016). Explicit reference readers are defined by equation (14) below. We first present the Stanford Reader as a paradigmatic aggregation reader and the Attention-Sum Reader as a paradigmatic explicit reference reader."}, {"section_index": "4", "section_name": "3.1 AGGREGATION READERS", "section_text": "Stanford Reader. The Stanford Reader (Chen et al., 2016) computes a bi-directional LSTM representation of both the passage and the question:

h = biLSTM(e(p))     (5)
q = [fLSTM(e(q))_{|q|}, bLSTM(e(q))_1]     (6)

In equations (5) and (6), e(p) is the sequence of word embeddings e(w_i) for w_i ∈ p, and similarly for e(q). The expression biLSTM(s) denotes the sequence of hidden state vectors resulting from running a bi-directional LSTM on the vector sequence s. 
We write biLSTM(s)_i for the i-th vector in this sequence. Similarly, fLSTM(s) and bLSTM(s) denote the sequences of vectors resulting from running a forward LSTM and a backward LSTM respectively, and [·, ·] denotes vector concatenation. The Stanford Reader, and various other readers, then compute a bilinear attention over the passage, which is used to construct a single weighted vector representation of the passage:

α_t = softmax_t h_t^⊤ W_a q     (7)
o = Σ_t α_t h_t     (8)

The answer is then selected using output embeddings of the candidate answers:

P(a|p, q, A) = softmax_{a∈A} e_o(a)^⊤ o     (9)
â = argmax_{a∈A} e_o(a)^⊤ o     (10)

Here e_o(a) is an "output embedding" of the answer a. On the CNN dataset the Stanford Reader trains an output embedding for each of the roughly 500 entity identifiers used in the dataset. In cases where the answer might be any word in V, an output embedding must be trained for the entire vocabulary.

The reader is trained with the log-loss ln 1/P(a|p, q, A), where a is the correct answer. At test time the reader is scored on the percentage of problems where â = a.

Memory Networks. Memory Networks (Weston et al., 2015; Sukhbaatar et al., 2015) use (8) and (10) but have more elaborate methods of constructing the "memory vectors" h_t, not involving LSTMs. Memory networks use (8) and (10) but replace (9) with

P(w|p, q, A) = P(w|p, q) = softmax_w e_o(w)^⊤ o.     (11)

It should be noted that (11) trains output vectors over the whole vocabulary rather than just those items occurring in the choice set A. This is empirically significant in non-anonymized datasets such as CBT and Who-did-What, where choices at test time may never have occurred as choices in the training data.

Attentive Reader. The Stanford Reader was derived from the Attentive Reader (Hermann et al., 2015). The Attentive Reader uses α_t = softmax_t MLP([h_t, q]) instead of (7), where MLP(x) is the output of a multi-layer perceptron given input x. Also, the answer distribution in the Attentive Reader is defined over the full vocabulary rather than just the candidate answer set A:

P(w|p, q, A) = P(w|p, q) = softmax_w e_o(w)^⊤ MLP([o, q])     (12)

Equation (12) is similar to (11) in that it leads to the training of output vectors for the full vocabulary rather than just those items appearing in choice sets in the training data. As in Memory Networks, this leads to improved performance on non-anonymized datasets.

Attention-Sum Reader. In the Attention-Sum Reader (Kadlec et al., 2016), h and q are computed with equations (5) and (6) as in the Stanford Reader, but using GRUs rather than LSTMs. The attention α_t is computed similarly to (7), but using a simple inner product α_t = softmax_t h_t^⊤ q rather than a trained bilinear form. Most significantly, however, equations (9) and (10) are replaced by the following, where t ∈ R(a, p) indicates that a reference to candidate answer a occurs at position t in p:

P(a|p, q, A) = Σ_{t∈R(a,p)} α_t     (13)
â = argmax_{a∈A} Σ_{t∈R(a,p)} α_t     (14)

Here we think of R(a, p) as the set of references to a in the passage p. It is important to note that (13) is an equality and that P(a|p, q, A) is not normalized over the members of R(a, p). When training with the log-loss objective, this drives the attention α_t to be normalized, that is, to have support only on the positions t with t ∈ R(a, p) for some a. See the heat maps in the appendix.

Gated-Attention Reader. The Gated-Attention Reader (Dhingra et al., 2016) involves a K-layer biGRU architecture defined by the following equations:

q^ℓ = [fGRU(e(q))_{|q|}, bGRU(e(q))_1]   for 1 ≤ ℓ ≤ K
h^1 = biGRU(e(p))
h^ℓ = biGRU(h^{ℓ−1} ⊙ q^{ℓ−1})   for 2 ≤ ℓ ≤ K

Here the question embeddings q^ℓ for different values of ℓ are computed with different GRU model parameters, and h ⊙ q abbreviates the sequence h_1 ⊙ q, h_2 ⊙ q, ..., h_{|p|} ⊙ q. Note that for K = 1 we have only q^1 and h^1, as in the Attention-Sum Reader. An attention is then computed over the final layer h^K with α_t = softmax_t (h_t^K)^⊤ q^K, as in the Attention-Sum Reader. 
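The difference between the aggregation criterion (9)-(10) and the explicit reference criterion (13)-(14) is easy to see in code. Below is a minimal NumPy sketch (our own illustration, with made-up toy dimensions) of both selection rules, given precomputed hidden states, a question vector, and reference sets R(a, p).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def aggregation_select(h, q, e_out):
    """Stanford-Reader-style selection, eqs (7)-(10), with W_a = I for brevity."""
    alpha = softmax(h @ q)          # attention over passage positions, eq (7)
    o = alpha @ h                   # weighted passage representation, eq (8)
    scores = {a: e @ o for a, e in e_out.items()}   # eq (10)
    return max(scores, key=scores.get)

def attention_sum_select(h, q, refs):
    """Attention-Sum-style selection, eqs (13)-(14): sum attention over mentions."""
    alpha = softmax(h @ q)
    scores = {a: alpha[list(ts)].sum() for a, ts in refs.items()}
    return max(scores, key=scores.get)

# Toy example: 6 passage positions, hidden dimension 4, two candidates.
rng = np.random.default_rng(1)
h, q = rng.normal(size=(6, 4)), rng.normal(size=4)
e_out = {"entity0": rng.normal(size=4), "entity1": rng.normal(size=4)}
refs = {"entity0": [0, 3], "entity1": [2, 5]}   # positions mentioning each answer
print(aggregation_select(h, q, e_out), attention_sum_select(h, q, refs))
```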
This reader uses (13) and (14).

Attention-over-Attention Reader. The Attention-over-Attention Reader (Cui et al., 2016) uses a more elaborate method to compute the attention α_t. We will use t to range over positions in the passage and j to range over positions in the question. The model is then defined by the following equations:

h = biGRU(e(p))     q = biGRU(e(q))
α_{t,j} = softmax_t h_t^⊤ q_j     β_{t,j} = softmax_j h_t^⊤ q_j
β_j = (1/|p|) Σ_t β_{t,j}     α_t = Σ_j β_j α_{t,j}

Note that the final equation defining α_t can be interpreted as applying the attention β_j to the attentions α_{t,j}. This reader uses (13) and (14)."}, {"section_index": "5", "section_name": "4 EMERGENT PREDICATION STRUCTURE", "section_text": "As discussed in the introduction, the entity identifiers such as "entity37" introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics, aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an emergent predication structure in the hidden vectors h_t. Intuitively, we want to think of the hidden state vector h_t as a concatenation [e(Φ_t), e'_o(a_t)], where Φ_t carries semantic information true of a_t. We think of h_t as representing Φ_t[a_t] for a semantic statement Φ_t[x] asserted of the constant symbol a_t. We also think of the vector representation q of the question as having the form [e(Ψ), 0] and the vector embedding e_o(a) as having the form [0, e'_o(a)].

Unfortunately, the decomposition of h_t into this predication structure need not be axis aligned. Rather than posit an axis-aligned concatenation, we posit that the hidden vector space H is a possibly non-aligned direct sum

H = S ⊕ E     (15)

where S is a subspace of "statement vectors" and E is an orthogonal subspace of "entity pointers". Each hidden state vector h ∈ H then has a unique decomposition as h = Ψ + e for Ψ ∈ S and e ∈ E. This is equivalent to saying that the hidden vector space H is some rotation of a concatenation of the vector spaces S and E.

We now present empirical evidence for this decomposition structure. We first note that the predication decomposition implies that e_o(a)^⊤ h_t equals e_o(a)^⊤ e'_o(a_t). This suggests the following, for some fixed positive constant c:

e_o(a)^⊤ h_t ≈ c if t ∈ R(a, p), and ≈ 0 otherwise.     (16)

Assuming the predication structure, we have c = ||e_o(a)||². We note that if different entity constants had different norms, then answers would be biased toward occurrences of the constant symbol of larger norm; but we need all constant symbols to be equivalent. We also note that (16) gives

argmax_a e_o(a)^⊤ o = argmax_a e_o(a)^⊤ Σ_t α_t h_t = argmax_a Σ_t α_t e_o(a)^⊤ h_t = argmax_a Σ_{t∈R(a,p)} α_t     (17)

and hence (10) and (14) agree: the aggregation readers and the explicit reference readers are using essentially the same answer selection criterion.
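The agreement in (17) can be checked directly on synthetic vectors. The sketch below (our own construction, not the paper's experiment) builds hidden states h_t = ψ_t + e_o(a_t) from a random rotation of a concatenation, then verifies that the aggregation score (10) and the attention-sum score (14) pick the same answer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_e, T = 8, 4, 10                      # statement dim, entity dim, positions
answers = [0, 1, 2]

# Orthonormal entity pointers living in the E-part, then a random rotation of
# the whole space so that the direct sum (15) is not axis aligned.
Q, _ = np.linalg.qr(rng.normal(size=(d_s + d_e, d_s + d_e)))
e_out = {a: Q @ np.concatenate([np.zeros(d_s), np.eye(d_e)[a]]) for a in answers}

a_t = rng.choice(answers, size=T)           # which entity each position mentions
psi = np.vstack([Q @ np.concatenate([rng.normal(size=d_s), np.zeros(d_e)])
                 for _ in range(T)])        # statement parts, living in S
h = psi + np.vstack([e_out[a] for a in a_t])    # h_t = psi_t + e_o(a_t)

alpha = rng.dirichlet(np.ones(T))           # an arbitrary attention over positions
o = alpha @ h                               # eq (8)

agg = {a: e_out[a] @ o for a in answers}            # eq (10)
att = {a: alpha[a_t == a].sum() for a in answers}   # eq (14)
assert max(agg, key=agg.get) == max(att, key=att.get)
print("aggregation and explicit-reference selection agree")
```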
Empirical evidence for (16) is given in the first three rows of Table 1. The first row empirically measures the "constant" c in (16) by measuring e_o(a)^⊤ h_t for those cases where t ∈ R(a, p). The second row measures the "0" in (16) by measuring e_o(a)^⊤ h_t in those cases where t ∉ R(a, p). Additional evidence for (16) is given in Figure 1, showing that the output vectors e_o(a) for different entity identifiers a are nearly orthogonal. Orthogonality of the output vectors is required by (16), provided that each output vector e_o(a) is in the span of the hidden state vectors h_t for which t ∈ R(a, p). Intuitively, the mean of all vectors h_t with t ∈ R(a, p) should be approximately equal to e_o(a). Of course, empirically this will only be approximately true.

Table 1:

                                       CNN Dev                         CNN Test
                              samples      mean  variance     samples      mean  variance
e_o(a)^⊤h_t, t ∈ R(a,p)       222,001     10.66      2.26     164,746     10.70      2.45
e_o(a)^⊤h_t, t ∉ R(a,p)    93,072,682     -0.57      1.59  68,451,660     -0.58      1.65
e_o(a)^⊤h_{t,1}, t ∈ R(a,p)   443,878      2.32      1.79     329,366      2.25      1.84
cosine(q, h_t), ∃a: t ∈ R(a,p) 222,001      0.22      0.11     164,746      0.22      0.12
cosine(q, e_o(a)), ∀a         103,909     -0.03      0.04      78,411     -0.03      0.04

Figure 1: Plot of e_o(a_i)^⊤ e_o(a_j) from the Stanford Reader trained on the CNN dataset. Off-diagonal values have mean 25.6 and variance 17.2, while diagonal values have mean 169 and variance 17.3.

Equation (16) would suggest that the vector embeddings of the constant symbols should have dimension at least as large as the number of distinct constants. However, in practice it is sufficient that e_o(a)^⊤ e_o(a') is small for a ≠ a'. This allows the vector embeddings of the constants to have dimension much smaller than the number of constants. We have experimented with two-sparse constant symbol embeddings, where the number of embedding vectors in dimension d is 2d(d−1) (d choose 2, times the four ways of setting the signs of the two non-zero coordinates). Although we do not report results here, these designed and untrained constant embeddings worked reasonably well.

As another testable prediction, we note that the posited decomposition of the hidden state vectors implies

q^⊤ (h_t + e_o(a)) = q^⊤ h_t.     (18)

This equation is equivalent to q^⊤ e_o(a) = 0. Experimentally, however, we cannot expect q^⊤ e_o(a) to be exactly zero, and (18) seems to provide a more experimentally meaningful test. Empirical evidence for (18) is given in the fourth and fifth rows of Table 1. The fourth row measures the cosine of the angle between the question vector q and the hidden state h_t, averaged over passage positions t at which some entity identifier occurs. The fifth row measures the cosine of the angle between q and e_o(a), averaged over the entity identifiers a.

A question asks for a value of x such that a statement Ψ[x] is implied by the passage. For a question Ψ we might even suggest the following vectorial interpretation of entailment:

Φ[x] implies Ψ[x]  iff  Φ^⊤ Ψ ≥ ||Ψ||₁.     (19)

This interpretation is exactly correct if some of the dimensions of the vector space correspond to predicates, Ψ is a 0-1 vector representing a conjunction of predicates, and Φ is also 0-1 on these dimensions, indicating whether a predicate is implied by the context. Of course, in practice one expects the dimension to be smaller than the number of possible predicates."}, {"section_index": "6", "section_name": "5 POINTER ANNOTATION READERS", "section_text": "It is of course important to note that anonymization provides reference information: anonymization assumes that one can determine coreference so as to replace coreferent phrases with the same entity identifier. Anonymization allows the reference set R(a, p) to be directly read off of the passage. Still, an aggregation reader must learn to recover this explicit reference structure.

Aggregation readers can have difficulty when anonymization is not done. The Stanford Reader achieves just better than 45% on the Who-did-What dataset, while the Attention-Sum Reader can get near 60%. But if we anonymize the Who-did-What dataset and then re-train the Stanford Reader, the accuracy jumps to near 65%. 
Anonymization has two effects. First, it greatly reduces the number of output embeddings e_o(a) to be learned: we need only learn output embeddings for the relatively small number of entity identifiers needed. Second, anonymization suppresses the semantics of the reference phrases and leaves only a semantics-free entity identifier. This suppression of semantics may facilitate the separation of the hidden state vector space H into a direct sum S ⊕ E with q ∈ S and e_o(a) ∈ E.

We can think of anonymization as providing additional linguistic input for the reader: it explicitly marks the positions of candidate answers and establishes coreference. A natural question is whether this information can be provided without anonymization, by simply adding additional coreference features to the input. Here we evaluate two architectures inspired by this question. The evaluation is done on the Who-did-What dataset, which is not anonymized. In each architecture we add features to the input to mark the occurrences of candidate answers. These models are simpler than the Stanford Reader but perform comparably; this comparable performance, shown in Table 2, further supports our analysis of logical structure in aggregation readers.

Table 2: Accuracy on the WDW dataset. All these results are based on single models. Results for neural readers other than NSE are based on replications of those systems. All models were trained on the relaxed training set, which uniformly yields better performance than the restricted training set. The first group of models are explicit reference models and the second group are aggregation models. † indicates anonymization with a better reference identifier.

One-Hot Pointer Annotation: The Stanford Reader involves both input embeddings of words and output embeddings of entity identifiers. In the Who-did-What dataset each problem has at most five choices in the multiple-choice answer list. This means that we need only five entity identifiers, and we can use a five-dimensional one-hot vector representation for answer identifiers. If an answer choice exists at position t in the passage, let i_t be the index of that choice in the choice list; if no choice occurs at t, take i_t to be zero. Take e'(i) to be the zero vector if i = 0 and otherwise to be the one-hot vector for i. We define pointer annotation to be the result of adding e'(i_t) as additional features to the input embedding:

ē(w_t) = [e(w_t), e'(i_t)]     (20)

We then define a one-hot pointer reader by designating five dimensions of the hidden state as indicators of the answer, and take the probability of choice i to be

p(i|d, q) = softmax_i o_i     (21)

where o is computed as in (8).

General Pointer Annotation: In the CNN dataset there are roughly 500 entity identifiers, and a one-hot representation is not desirable. Instead, we can let e'(i) be a fixed set of "pointer vectors": vectors distributed widely on the unit sphere, so that for i ≠ j we have that e'(i)^⊤ e'(j) is small. We again use (20), and replace (21) with

p(i|d, q) = softmax_i [0, e'(i)]^⊤ o     (22)

In the general pointer reader the pointer embeddings e'(i) are held fixed and are not trained. A sketch of the annotation scheme follows.
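Below is a minimal sketch of pointer annotation (our own illustration; embedding sizes and the toy passage are made up). The input embedding of each token is concatenated with the one-hot pointer feature e'(i_t) as in (20), and the one-hot pointer reader scores choices by reading off the designated dimensions of o as in (21).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pointer_annotate(token_embs, choice_index, n_choices=5):
    """Eq (20): append e'(i_t) to each token embedding.
    choice_index[t] is 1..n_choices if token t is a candidate answer, else 0."""
    T, _ = token_embs.shape
    pointer = np.zeros((T, n_choices))
    for t, i in enumerate(choice_index):
        if i > 0:
            pointer[t, i - 1] = 1.0
    return np.hstack([token_embs, pointer])

def one_hot_pointer_probs(o, n_choices=5):
    """Eq (21): the last n_choices dimensions of o are designated answer indicators."""
    return softmax(o[-n_choices:])

# Toy passage of 7 tokens where tokens 1 and 4 mention choices 2 and 3.
rng = np.random.default_rng(0)
embs = pointer_annotate(rng.normal(size=(7, 16)), [0, 2, 0, 0, 3, 0, 0])
# In the real model, embs would be fed through the biLSTM and attention of
# eqs (5)-(8); here we fake o by attending uniformly over the annotated embeddings.
o = embs.mean(axis=0)
print(one_hot_pointer_probs(o))
```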
Linguistic Features: Each model can be modified to include additional input features for each input token in the question and passage. More specifically, we can add the following features to the word embeddings (a feature-extraction sketch follows the list):

• Binary feature: whether the current token occurs in the question.
• Real-valued feature: the frequency of the current token in the passage.
• Real-valued feature: the position of the token's first occurrence in the passage, as a percentage of the passage length.
• Binary feature: whether the text surrounding the token matches the text surrounding the placeholder in the question. We only use features for matching one word to the left and one word to the right.
• One-hot vector: the part-of-speech (POS) tag of the token. We only use this feature on the CBT dataset.
• One-hot vector: the named entity recognition (NER) tag of the token. We only use this feature on the CBT dataset.
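A sketch of how the first four features above could be computed per token (our own illustration; the tokenization and one-word context window are assumptions):

```python
def token_features(passage, question, placeholder="@placeholder"):
    """Compute per-token linguistic features for a tokenized passage.
    Returns, per token: [in_question, frequency, first_pos, context_match]."""
    q_set = set(question)
    counts = {w: passage.count(w) for w in set(passage)}
    first = {w: passage.index(w) / len(passage) for w in set(passage)}
    # Left/right neighbors of the placeholder in the question.
    p = question.index(placeholder)
    left = question[p - 1] if p > 0 else None
    right = question[p + 1] if p + 1 < len(question) else None
    feats = []
    for t, w in enumerate(passage):
        match = float(0 < t and t + 1 < len(passage)
                      and passage[t - 1] == left and passage[t + 1] == right)
        feats.append([float(w in q_set), counts[w] / len(passage), first[w], match])
    return feats

passage = "entity1 praised entity2 for the rescue".split()
question = "X was praised by @placeholder for the rescue".split()
print(token_features(passage, question)[0])  # features of the first passage token
```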
The performance of various recent readers on CNN, DailyMail and CBTest is summarized in Table 3. For purposes of comparison we only present results for single models. Model ensembles generally perform better than single models but require more computation to train, making comparisons more difficult. More experimental details can be found in the appendix.

Table 3: Accuracy on CNN, DailyMail, CBTest NE and CBTest CN. All results are based on a single model. Results other than those involving pointer or linguistic feature annotations are taken from the original publications. Readers in the first group are explicit reference readers. Readers in the second group are aggregation readers. The final reader defies this classification.

In Table 3, all of the high-performing approaches were proposed very recently. Blue color represents the second-highest accuracy and bold font indicates the state-of-the-art accuracy. Note that the result of the Stanford Reader we report here is the one without relabeling, since the relabeling procedure doesn't follow the protocol used in Hermann et al. (2015)."}, {"section_index": "7", "section_name": "7 DISCUSSION", "section_text": "Explicit reference architectures rely on reference resolution: a specification of which phrases in the given passage refer to candidate answers. Our experiments indicate that all existing readers benefit greatly from this externally provided information. Aggregation readers seem to demonstrate a stronger learning ability, in that they essentially learn to mimic explicit reference readers by identifying reference annotation and using it appropriately. This is done most clearly in the pointer reader architectures. Furthermore, we have argued for, and given experimental evidence for, an interpretation of aggregation readers as learning emergent logical structure: a factoring of neural representations into a direct sum of a statement (predicate) representation and an entity (argument) representation.

Of course there is great interest in "learning representations". The current state of the art in reading comprehension is such that systems still benefit from externally provided linguistic features, including externally annotated reference resolution. It would seem desirable to develop fully automated neural readers that perform as well as readers using externally provided annotations. It is of course important to avoid straw-man baselines when making any such claim.

At a very high level, our analysis and experiments support a central role for reference resolution in reading comprehension. Automating reference resolution in neural models, and demonstrating its value on appropriate datasets, would seem to be an important area for future research.

We are hesitant to make more detailed comments on the differences between the architectural details of the readers discussed in this paper. The differences in scores between the leading readers are comparable to the differences that can be achieved by aggressive search over meta-parameters, or to the statistical fluctuations in the quality of models learned by noisy statistical training procedures. More careful experiments over a longer period of time are needed. More dramatic improvements in performance would of course provide better support for particular innovations."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank NVIDIA Corporation for the donation of GPUs used for this work."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the ACL, 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. ArXiv, 2016.

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. ArXiv, 2016.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2015.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1:908-918, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.

Tsendsuren Munkhdalai and Hong Yu. Reasoning with memory augmented neural networks for language comprehension. ArXiv, 2016.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. In Proceedings of EMNLP, 2016.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the ACL, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. 
In Proceedings of ICML, pp. 1310-1318, 2013.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, 2016.

Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 3:4-10, 2013.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ArXiv, 2013.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. ArXiv, 2016.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. ArXiv, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. ArXiv, 2016.

Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: Frameworks for deep learning. ArXiv, 2015.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: New features and speed improvements. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2012.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the 4th International Conference on Learning Representations, 2016.

For the Stanford Reader and the One-Hot Pointer Reader, we simply follow the Stanford Reader's settings and did not tune them per dataset. For the Gated-Attention Reader, the lookup table was randomly initialized from a uniform distribution over the interval [-0.2, 0.2] on the CBT dataset, while on CNN & DailyMail the lookup table was initialized with GloVe vectors (Pennington et al., 2014) trained on the train & validation sets (we found that the pre-trained word vectors do not improve the accuracy but do accelerate training). On the WDW dataset, the lookup table was initialized with pre-trained GloVe vectors.² It should be noted that initializing the lookup table with the pre-trained GloVe vectors slightly boosts accuracy compared with using GloVe vectors trained on the train & validation sets. Input-to-hidden-state weights were initialized with random orthogonal matrices (Saxe et al., 2013) and biases were initialized to zero. Hidden-to-hidden-state weights were initialized with identity matrices, so that the model can remember longer-range information. To compute the attention weights we use α_t = h_t^⊤ W_a q and initialize W_a from a random uniform distribution. We also used gradient clipping (Pascanu et al., 2013) with a threshold of 10, and batches of size 32.

During training we randomly shuffled all examples within each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length, as done by Kadlec et al. (2016). 
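The initialization scheme described above is easy to make concrete. Below is a PyTorch-style sketch (our own illustration; the module and hyperparameter names are made up, and the identity-block initialization for the stacked GRU gate matrices is an assumption about how to apply the description to a GRU):

```python
import torch
import torch.nn as nn

class BilinearAttention(nn.Module):
    """Attention alpha_t = softmax_t h_t^T W_a q, with W_a uniformly initialized."""
    def __init__(self, dim):
        super().__init__()
        self.W_a = nn.Parameter(torch.empty(dim, dim).uniform_(-0.05, 0.05))

    def forward(self, h, q):          # h: (T, dim), q: (dim,)
        scores = h @ self.W_a @ q     # (T,)
        return torch.softmax(scores, dim=0)

def init_gru(gru: nn.GRU):
    """Appendix initialization: orthogonal input-to-hidden weights,
    identity-like hidden-to-hidden weights, zero biases."""
    for name, p in gru.named_parameters():
        if "weight_ih" in name:
            nn.init.orthogonal_(p)
        elif "weight_hh" in name:
            # A GRU stacks three gate matrices; make each block an identity.
            nn.init.zeros_(p)
            d = p.shape[1]
            for k in range(3):
                p.data[k * d:(k + 1) * d] += torch.eye(d)
        elif "bias" in name:
            nn.init.zeros_(p)

gru = nn.GRU(input_size=128, hidden_size=256)
init_gru(gru)
# Gradient clipping with threshold 10, as described above:
torch.nn.utils.clip_grad_norm_(gru.parameters(), max_norm=10.0)
```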
When trained on the CNN, DailyMail and WDW (anonymized) datasets, we randomly reshuffled the entity identifiers to match the procedure proposed in Hermann et al. (2015). During training we evaluated the accuracy after each epoch and stopped training when the accuracy on the validation set started decreasing. We tried limiting the vocabulary to the most frequent tokens but did not observe any performance improvement compared with using all the distinct tokens as the vocabulary. Since part of our experiments needs to check word embedding assignment issues, we finally used all the distinct tokens as the vocabulary. To find the optimal embedding and hidden state dimensions, we tried several different combinations; the optimal values and the corresponding training statistics for the Gated Attention Reader are summarized in Table 4. When anonymizing the Who-did-What dataset, we can either use simple string matching to replace the answer in the question and story with an entity identifier, or we can use Named Entity Recognition (NER) tools³ to detect named entities and then replace the answer named entities in the question and story with entity identifiers; we found that the latter generally brings a 2% improvement compared with simple string matching. More experimental details can be found in the code.
Table 4: Training details on the different datasets.
Dataset | Embedding | Hidden State | Time Per Epoch | Trained Epochs | K
CNN | 128 | 256 | 18 hours | 5 | 3
DailyMail | 128 | 256 | 2 days | 5 | 3
WDW Relaxed | 200 | 384 | 2.5 hours | 8 | 1
CBT NE | 384 | 384 | 1 hour | 8 | 1
CBT CN | 384 | 256 | 1 hour | 7 | 1
We randomly choose one article from the CNN dataset and show softmax(e_o(a)h_t) for t ∈ [0, |p|] for each answer candidate a in Figures 2–6. Red indicates larger probability, orange indicates smaller probability, and the remaining text has very low probability that can be ignored. From those figures, we can see that our assumption that e_o(a) is used to pick up its occurrences is reasonable.
²http://nlp.stanford.edu/data/glove.6B.zip
³http://nlp.stanford.edu/software/CRF-NER.shtml
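For concreteness, a minimal NumPy sketch of how such a per-candidate heat map can be computed from an answer candidate's output embedding and the per-token hidden states (variable names are ours):

```python
import numpy as np

def candidate_heatmap(e_a, H):
    """Per-token attention for one answer candidate.

    e_a: output embedding of candidate a, shape (d,)
    H:   hidden states over the passage, shape (T, d), one row per token
    Returns softmax over tokens of e_a . h_t, shape (T,).
    """
    scores = H @ e_a            # inner product with every token state
    scores -= scores.max()      # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()
```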
Figure 2: Heat map of softmax(e_o(a)h_t) when a = entity0.
Figure 4: Heat map of softmax(e_o(a)h_t) when a = entity16.
Figure 6: Heat map of softmax(e_o(a)h_t) when a = entity47.
[Figures 2–6 each render the same CNN article, about the @entity0 kosher supermarket siege, with the query "they hid in a cold room during the attack in @entity0 by gunman @placeholder", highlighting for one answer candidate the tokens with the largest probability; the colored heat maps are omitted here.]
We randomly choose one article from the CNN dataset and show the attention map a_t = softmax(qᵀW_q h_t) for different readers (in the Attention Sum and Gated Attention Readers, W_q is the identity matrix). From Figures 7, 8 and 9, we can see that the different readers essentially put their weight on the entity identifiers.
Figure 8: Heat map of a_t for the Gated Attention Reader.
[Figures 7–9 each render the same CNN article, about @entity2 militants attacking villages inside @entity5, with the query "@placeholder is based in @entity29 but has attacked across the border of several neighbors"; the colored heat maps are omitted here.]"}]
r1LXit5ee
[{"section_index": "0", "section_name": "EPISODIC EXPLORATION FOR DEEP DETERMINISTIC\nPOLICIES FOR STARCRAFT MICROMANAGEMENT", "section_text": "Nicolas Usunier*, Gabriel Synnaeve*, Zeming Lin, Soumith Chintala\nFacebook AI Research\n{usunier, gab, zlin, soumith}@fb.com\nWe consider scenarios from the real-time strategy game StarCraft as benchmarks\nfor reinforcement learning algorithms. We focus on micromanagement, that is, the\nshort-term, low-level control of team members during a battle. We propose several\nscenarios that are challenging for reinforcement learning algorithms because the\nstate- action space is very large, and there is no obvious feature representation fot\nthe value functions. We describe our approach to tackle the micromanagement\nscenarios with deep neural network controllers from raw state features given by\nthe game engine. We also present a heuristic reinforcement learning algorithrr\nwhich combines direct exploration in the policy space and backpropagation. This\nalgorithm collects traces for learning using deterministic policies, which appears\nmuch more efficient than, e.g., e-greedy exploration. Experiments show that this\nalgorithm allows to successfully learn non-trivial strategies for scenarios with\narmies of up to 15 agents, where both Q-learning and REINFORCE struggle."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "StarCraft ) is a real-time strategy (RTS) game in which each player must build an army and control\nindividual units to destroy the opponent\u2019s army. As of today, StarCraft is considered one of the\nmost difficult games for computers, and the best bots only reach the level of high amateur human\nplayers (Churchill] (2015). The main difficulty comes from the need to control a large number of\nunits in partially observable environment, with very large state and action spaces: for example, in a\ntypical game, there are at least 101\u00b0\u00b0\u00b0 possible states whereas the game of Go has about 10!\u201d\u00b0 states\nBecause of simultaneous and durative actions, StarCraft provides an ideal environment to study the\ncontrol of many agents at large scale, and an opportunity to define tasks of increasing difficulty, from\nmicromanagement, which concerns the short-term, low-level control of fighting units during battles\nto long-term strategic and hierarchical planning under uncertainty. While building a controller for the\nfull game based on machine learning is out-of-reach with current methods, we propose, as a first step\nto study reinforcement learning (RL) algorithms in micromanagement scenarios in StarCraft.\nBoth the work on Atari games (Mnih et al.||2013) and the recent Minecraft scenarios studied by\nresearchers (Oh et al.}/2016) focus on the control of a single agent, with a fixed\nlimited set of actions. Coherently controlling multiple agents (units) is the main challenge o\nreinforcement learning for micromanagement tasks. This comes with two main challenges. The firs\none is to efficiently explore the large action space. The implementation of a coherent strategy require:\nthe units to take actions that depend on each other, but it also implies that any small alteration of :\nstrategy must be maintained for a sufficiently long time to properly evaluate the long-term effect o\nthat change. 
In contrast to this requirement of consistency in exploration, the reinforcement learning algorithms that have been successful in training deep neural network policies, such as Q-learning (Watkins & Dayan, 1992; Sutton & Barto, 1998) and REINFORCE (Williams, 1992; Deisenroth et al., 2013), perform exploration by randomizing actions. In the case of micromanagement, randomizing actions mainly disorganizes the units, which then rapidly lose the battle without collecting relevant feedback. The second challenge of micromanagement is that there is no obvious way to parameterize the policy given the state and the actions, because actions are relations between entities of the state, e.g. (unit A, attack, unit B) or (unit A, move, position B), and are not restricted to a few constant symbols such as "move left" or "move right". Multi-class architectures, such as those used for Atari games (Mnih et al., 2015), cannot evaluate actions that are parameterized by an entity of the state.
The contribution of this paper is twofold. First, we propose several micromanagement tasks from StarCraft (Section 3), then we describe our approach to tackle them and evaluate well-known reinforcement learning algorithms on these tasks (Section 4). In particular, we present an approach of greedy inference to break out the complexity of taking the actions at each step. We also describe the features used to jointly represent states and actions, as well as a deep neural network model for the policy (Section 5). Second, we propose the zero-order (ZO) reinforcement learning algorithm to address the difficulty of exploration in these tasks (Section 6). Compared to algorithms for efficient direct exploration in parameter space, the novelty of our algorithm is to explore directly in policy space by mixing parameter randomization and plain gradient descent.
Algorithms that have been used to train deep neural network controllers in reinforcement learning include Q-learning (Watkins & Dayan, 1992; Mnih et al., 2015), the method of temporal differences (Sutton, 1988; Tesauro, 1995), policy gradient and its variants (Williams, 1992; Deisenroth et al., 2013), and actor/critic architectures (Barto et al., 1983). Except for the deterministic policy gradient (DPG) (Silver et al., 2014), these algorithms rely on randomizing the actions at each step for exploration. DPG collects traces by following deterministic policies that remain constant throughout an episode, but can only be applied when the action space is continuous. Hausknecht & Stone (2015) apply DPG with parameterized action spaces, in which discrete actions (e.g. "move") are parameterized by continuous variables (e.g. the target location). Our work is most closely related to works that explore the parameter space of policies rather than the action space. Several approaches have been proposed that randomize the parameters of the policy at the beginning of an episode and run a deterministic policy throughout the entire episode, borrowing ideas from gradient-free optimization, e.g. (Mannor et al., 2003; Szita & Lőrincz, 2006; Sehnke et al., 2008). However, these algorithms rely on gradient-free optimization for all parameters, which does not scale well with the number of parameters. Osband et al. (2016b) describe another type of algorithm where the parameters of a deterministic policy are randomized at the beginning of an episode, and learn a posterior distribution over the parameters as in Thompson sampling (Thompson, 1933). 
Their approach was proved to be efficient, but applies only to linear functions and scales quadratically with the number of parameters. The bootstrapped deep Q-networks (BDQN) (Osband et al., 2016a) are a practical implementation of the ideas of Osband et al. (2016b) for deep neural networks. However, BDQN still performs exploration in the action space at the beginning of training, and there is no randomization of the parameters. BDQN keeps several versions of the last layer of the deep neural network, and selects a single version per episode to perform Q-learning updates, while it ensembles all such "heads" at test time. In contrast, we randomize the parameters of the last layer once at the beginning of each episode and do not rely on estimates of a state-action value function.
Multi-agent reinforcement learning has been an active area of research (Busoniu et al., 2008). Most of the focus has been on learning agents in competitive environments with adaptive adversaries (Littman, 1994; Hu & Wellman, 1998; Tesauro, 2003). Some work has looked at learning control policies for individual agents in a collaborative setting with communication constraints (Tan, 1993; Bernstein et al., 2002), with applications such as soccer robot control (Stone & Veloso, 1999), and methods such as hierarchical reinforcement learning for communicating high-level goals (Ghavamzadeh et al., 2006), or learning an efficient communication protocol (Sukhbaatar et al., 2016). While the decentralized control framework is most likely relevant for playing full games of StarCraft, here we avoid the difficulty of imperfect information; therefore we use the multi-agent structure only as a means to structure the action space. As in the approach of Maes et al. (2009) with reinforcement learning for structured output prediction, we use a greedy sequential inference scheme at each time frame: each unit decides on its action based solely on the state combined with the actions of units that came before it in the sequence.
In the context of RTS micromanagement, a large spectrum of AI approaches have been studied. There has been work on Bayesian fusion of hand-designed influence maps (Synnaeve & Bessiere, 2011), fast heuristic search in a simplified simulator (Churchill et al., 2012), and even evolutionary optimization (Liu et al., 2014). Overmind (Klein et al., 2010) used threat-aware A* pathing and RL-tuned potential fields. Closer to this work, Marthi et al. (2005) employ concurrent hierarchical Q-learning (the units' Q-functions are combined at the group level), and Wender & Watson (2012) successfully applied tabular Q-learning (Watkins & Dayan, 1992) and SARSA (Sutton & Barto, 1998), with and without experience replay ("eligibility traces"), with a reward similar to the one used in several of our experiments. However, the action space was reduced to pre-computed "meta-actions" (fight and retreat), and the features were hand-crafted. None of these approaches are used as-is in existing StarCraft bots, for lack of robustness or completeness (both can be attributed to hand-crafting), or of computational efficiency. For a more detailed overview of AI research on StarCraft, the reader should consult Ontañón et al. (2013)."}, {"section_index": "3", "section_name": "3 STARCRAFT MICROMANAGEMENT SCENARIOS", "section_text": "
We focus on micromanagement, which consists of optimizing each unit's actions during a battle. The tasks presented in this paper represent only a subset of the complexity of playing StarCraft. As StarCraft is a real-time strategy (RTS) game, actions are durative (they are not fully executed on the next frame), and there are approximately 24 frames per second. As we take an action for each unit every few frames (e.g. every 9 frames here; more details can be found in Appendix D), we only consider actions that can be executed in this time frame, which are: the 8 move directions, holding the current position, and an attack action for each of the existing enemy units. During training, we always control all units from one side, and the opponent (the built-in AI in the experiments) is attacking us:
m5v5 is a task in which we control 5 Marines (ranged ground units), against 5 opponent Marines. A good strategy here is to focus fire, e.g. order all Marines to attack a single opponent.
m15v16: same as above, except we have 15 Marines and the opponent has 16. A good strategy here is also to focus fire, while avoiding "overkill": 7 Marines attacking simultaneously kill an opponent in a single volley, so using more Marines to simultaneously target an enemy causes attacks to be wasted.
dragoons_zealots: symmetric armies with two types of units: 3 Zealots (melee ground units) and 2 Dragoons (ranged ground units). Here a strategy requires focus fire and, if possible, to 1) not spend too much time having the Zealots walk instead of fight, and 2) focus the Dragoons, who die more easily but deal more damage.
w15v17: we control 15 Wraiths (ranged flying units) while the opponent has 17. Flying units have no "collision", so multiple units can occupy the same tile and reach their targets more quickly. It only takes 6 Wraiths to kill an opponent in a single volley, hence it is important not to "overkill" on this map.
other mXvY or wXvY scenarios: the 4 scenarios above are the ones on which we train our models, but the models can learn strategies that overfit a given number of units, so we also use similar scenarios with different numbers of units (on each side).
For all these scenarios, a human expert can win 100% of the time against the built-in AI, by moving away units that are hurt (thus conserving firepower) and with proper focus firing.
Formalism The environment is approximated as a Markov Decision Process (MDP), with a finite set of states denoted by $S$. Each state $s$ has a set of units $\mathcal{U}(s)$, and a policy has to issue a command $c \in C$ to each of them. The set of commands is finite. An action in that MDP is represented as a sequence of (unit, command) pairs $a = ((u_1, c_1), \dots, (u_{|s|}, c_{|s|}))$ such that $\{u_1, \dots, u_{|s|}\} = \mathcal{U}(s)$; $|s|$ denotes the number of units in state $s$, and $\mathcal{A}(s) = (\mathcal{U}(s) \times C)^{|s|}$ the set of actions in state $s$. We denote by $p(s'|s,a)$ the transition probability of the MDP and by $\rho_1$ the probability distribution of initial states. When there is a transition from state $s^t$ to a state $s^{t+1}$, the agent receives the reward $r^{t+1} = r(s^t, s^{t+1})$, where $r : S \times S \to \mathbb{R}$ is the reward function. We assume that commands are received and executed concurrently, so that the order of commands in an action does not alter the transition probabilities. Finally, we consider the episodic reinforcement learning scenario, with finite horizon $T$ and undiscounted rewards. The learner has to learn a (stochastic) policy $\pi(a|s)$, which defines a probability distribution over actions in $\mathcal{A}(s)$ for every $s \in S$. The objective is to maximize the expected undiscounted cumulative reward over episodes $R(\pi) = \mathbb{E}\big[\sum_{t=1}^{T-1} r(s^t, s^{t+1})\big] = \mathbb{E}\big[r^{1..T}\big]$, where the expectation is taken with respect to $s^1 \sim \rho_1$, $s^{t+1} \sim p(\cdot|s^t, a^t)$ and $a^t \sim \pi(\cdot|s^t)$.
The "greedy" MDP One way to break out the complexity of jointly inferring the commands to each individual unit is to perform greedy inference at each step: at each state, units choose a command one by one, knowing the commands that were previously taken by other units (a sketch is given below). 
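A minimal sketch of this greedy per-unit inference, assuming a learned scorer over (state, already-chosen commands, unit, command) tuples; the function names and interface are ours, not the paper's code:

```python
def greedy_inference(state, units, legal_commands, score):
    """Choose commands one unit at a time.

    score(state, chosen, unit, command) -> float is the learned
    state/command value; `chosen` is the list of (unit, command)
    pairs already fixed at this frame.
    """
    chosen = []
    for unit in units:
        best = max(legal_commands(state, unit),  # legal commands for this unit
                   key=lambda c: score(state, chosen, unit, c))
        chosen.append((unit, best))
    return chosen
```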
Learning a greedy policy boils down to learning a policy in another MDP with fewer actions per state but exponentially more states, where the additional states correspond to the intermediate steps of the greedy inference. This reduction was previously proposed in the context of structured prediction by Maes et al. (2009), who proved that an optimal policy in this new MDP has the same cumulative reward as an optimal policy in the original MDP. We expand on this in Appendix B.
Normalized cumulative rewards Immediate rewards are necessary to provide feedback that guides exploration. In the case of micromanagement, a natural reward signal is the difference between damage inflicted and incurred between two states. The cumulative reward over an episode is the total damage inflicted minus the total damage incurred along the episode. However, the scale of this quantity heavily depends on the number of units (both our units and enemy units, which significantly decreases along an episode) that are present in the state. Without proper normalization with respect to the number of units in the current state $z(s)$, learning will be artificially biased towards the large immediate rewards at the beginning of the episode. Then, instead of considering cumulative rewards from a starting state $s^t$, we define normalized cumulative rewards $\bar{n}^{t..T}$ as the following recursive computation over an episode:
$$\forall t \in \{1, \dots, T-1\}, \qquad \bar{n}^{t..T} = \frac{r^{t+1} + z(s^{t+1})\,\bar{n}^{t+1..T}}{z(s^t)}.$$
We use the sum of maximum hit points of all units in the state $s^t$ as the normalization factor $z(s^t)$, which implies that $\bar{n}^{t..T} \in [-0.5, 0.5]$. One way to look at this normalization process is to consider that the reward is $\frac{r^{t+1}}{z(s^t)}$ and that $\frac{z(s^{t+1})}{z(s^t)}$ plays the role of an (adaptive) discount factor, which is chosen to be at most 1, and strictly smaller than 1 when the number of units changes.
For policy gradient and our algorithm described in Section 6, we directly use $\bar{n}^{t..T}$. We describe in Appendix C how we adapted the update rule for Q-learning.
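A minimal sketch of computing these normalized returns backwards over a recorded episode, directly implementing the recursion above (names are ours):

```python
def normalized_returns(rewards, z):
    """Backward recursion for normalized cumulative rewards.

    rewards: [r^2, ..., r^T], reward received on entering each next state
    z:       [z(s^1), ..., z(s^T)], sum of max hit points per state
    Returns [n^{1..T}, ..., n^{T-1..T}].
    """
    T = len(z)
    n = [0.0] * T                  # n[T-1] corresponds to the empty tail
    for t in reversed(range(T - 1)):
        # reward r^{t+1} normalized by z(s^t), plus adaptively discounted tail
        n[t] = (rewards[t] + z[t + 1] * n[t + 1]) / z[t]
    return n[:-1]
```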
We describe in this section the features and the neural network architecture we use to parameterize the policy. Since we consider the greedy inference described in the previous section, the underlying MDP will contain states of the form $\tilde{s} = (s, a_{1..k}, u_{k+1})$, where: $s$ is the current state of the game given by the game engine, $k$ is the number of units which have already "played" at this frame, $a_{1..k}$ is the sequence of the $k$ (unit, command) pairs corresponding to the $k$ commands that have already been chosen, and finally $u_{k+1}$ is the unit to play. For each unit, we consider two types of commands: (1) attack a given enemy unit, and (2) move to a specific position. In order to reduce the number of possible move commands, we only consider 9 move commands, which either correspond to a move in one of the 8 basic directions, or to staying at the same position.
There are several challenges to represent states and actions in RTS games:
» The number of units and actions is not bounded a priori and varies in time
» Commands must be evaluated in the context of all currently executing commands
» Attack actions must resolve the reference to their target
To address the first two challenges, we adopt an approach based on a joint encoding of states and commands. Denoting by $\tilde{s} = (s, a_{1..k}, u_{k+1})$ the current state of the greedy MDP and by $c$ a candidate command, we learn the parameters $w$ and $\theta$ of a (state, command) value function of the form $f(\tilde{s}, c) = \langle w, \Psi_\theta(\tilde{s}, c) \rangle$, where $w \in \mathbb{R}^d$ and $\Psi_\theta(\tilde{s}, c)$ is the output of an embedding network that maps (state, command) pairs to $\mathbb{R}^d$, with parameters $\theta$. In Q-learning and in our algorithm presented in the next section, we directly use $f$ as the state/action value function, whereas in policy gradient the probability to take command $c$ in state $\tilde{s}$ is given by the Gibbs distribution over $f(\tilde{s}, c)$ with temperature $\tau$: $\pi(c|\tilde{s}) \propto e^{f(\tilde{s},c)/\tau}$.
Table 1: Unit features as given by the game engine, their abbreviated names and their types: cat. means the feature is categorical and 1-hot encoded; real-valued features come with their re-scaling constant.
hit points (hp, ∈ ℝ, /20) | shield (shield, ∈ ℝ, /20) | cooldown (cd, ∈ ℝ, /10) | is enemy (nmy, bool) | unit type (type, cat.)
position (pos, ∈ ℝ², /20) | previous target (tgt_pos, ∈ ℝ², /20) | chosen target (next_pos, ∈ ℝ², /20) | prev. cmd type (prev_cmd, cat.) | chosen cmd type (next_cmd, cat.)
To tackle the last challenge, we identify units with their (x, y) coordinates on the map. We add two fields to the unit features that contain the coordinates of their corresponding target, or their own location if they do not have a target. To evaluate a command c = (<actor unit>, <attack or move>, <target>), we compute pairwise distances between the actor and the target. Note that with this kind of representation, the input of the embedding network $\Psi_\theta$ is a joint representation of the state $\tilde{s}$ and the command $c$ to evaluate. A complete list of unit features is given in Table 1. Hit points are the remaining life points of the unit, shield corresponds to additional hit points that are not affected by armor and regenerate slowly, and cooldown is the time to wait until damage can be inflicted.
The full scoring approach is depicted in Figure 1. In our approach, a state is represented as a list of units. The raw features are transformed by a featurizer that takes the 3 positional unit features (pos, tgt_pos and next_pos) and computes their distances to the position of the acting unit and of its target (pos_a and tgt_a). All 4 categorical variables are passed through a 10-dimensional linear embedding (not shown in the figure). In addition to the 4 real-valued unit features, we thus have a 40-dimensional feature vector per unit as input to our network.
Each unit feature vector then goes through the unit-level embedding network. We then concatenate the max and mean poolings across units with an embedding of the command type. The resulting 210-dimensional vector is passed through a final state-command embedding network. Both the unit-level and state-command embedding networks have a hidden dimension of 100, and ELU nonlinearities in the intermediate layer (Clevert et al., 2015). We use tanh for the final unit-level network nonlinearity, and a ReLU for the final state-command network nonlinearity. 
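To make the architecture concrete, here is a minimal PyTorch-style sketch of the scoring network as we read it from the description above; this is our reconstruction, not the authors' code, and the number of command types is an assumption:

```python
import torch
import torch.nn as nn

class ScoringNetwork(nn.Module):
    def __init__(self, unit_feat_dim=40, hidden=100, n_cmd_types=10, cmd_emb_dim=10):
        super().__init__()
        # Unit-level embedding network: Linear(40x100)-ELU-Linear(100x100)-Tanh.
        self.unit_net = nn.Sequential(
            nn.Linear(unit_feat_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.Tanh())
        self.cmd_emb = nn.Embedding(n_cmd_types, cmd_emb_dim)  # command-type embedding
        # State-command network: max-pool (100) + mean-pool (100)
        # + command-type embedding (10) = 210 input dimensions.
        self.state_cmd_net = nn.Sequential(
            nn.Linear(2 * hidden + cmd_emb_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.w = nn.Parameter(torch.randn(hidden) / hidden ** 0.5)  # last layer w

    def forward(self, unit_feats, cmd_type):
        # unit_feats: (n_units, 40); cmd_type: scalar long tensor
        h = self.unit_net(unit_feats)                             # (n_units, 100)
        pooled = torch.cat([h.max(dim=0).values, h.mean(dim=0)])  # (200,)
        x = torch.cat([pooled, self.cmd_emb(cmd_type)])           # (210,)
        psi = self.state_cmd_net(x)                               # Psi_theta(s, c)
        return torch.dot(self.w, psi)                             # f(s, c) = <w, Psi>
```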
We did not extensively experiment with the structure of the network, but we found the max-pooling and tanh nonlinearity to be particularly important.
The advantage of this approach is that it relies on raw features only, and does not require any encoding of the game dynamics, in contrast to previous works on RL for micromanagement (see e.g. Wender & Watson (2012)) that used domain knowledge handcrafted into the features (such as the damage inflicted by an attack). The distance-based encoding is also a simple way to represent the different relationships between units that correspond to previous/chosen attacks.
Figure 1: Representation of the joint (state, command) featurization and scoring process. [The figure shows the raw inputs per unit (hp, pos, tgt_pos, cmd type, ...) passing through the featurizer (hp, d(pos, pos_a), d(pos, tgt_a), ...), a 10-dimensional embedding of the categorical features, the unit-level embedding network (Linear 40×100, ELU, Linear 100×100, Tanh), max and mean pooling across units concatenated with the command-type embedding, and the state-command embedding network (Linear 210×100, ELU, Linear 100×100, ReLU).]"}, {"section_index": "4", "section_name": "6 COMBINING BACKPROPAGATION AND ZERO-ORDER OPTIMIZATION", "section_text": "Our preliminary experiments with Q-learning or REINFORCE made it clear that structured exploration was necessary to learn non-trivial strategies with substantial armies. The randomization of actions leads to the disorganization of the army and a rapid defeat, which prevents the algorithms from evaluating alterations to the current policy in the long run. Whereas gradient-free optimization that performs episode-based exploration (e.g. Mannor et al. (2003); Sehnke et al. (2010)) would be a valid choice, it only scales to few parameters. Preliminary experiments with direct exploration in the parameter space of the deep neural network confirmed that a more efficient scheme was needed.
The deterministic policy we consider takes action $a$ in state $s$ according to the rule
$$\pi_{w,\theta}(s) = \operatorname*{argmax}_{a \in \mathcal{A}(s)} \langle w, \Psi_\theta(s, a) \rangle.$$
We use the notation $(s, a)$ for states and actions in an MDP for the presentation of the algorithm, even though in our experiments we use it with states $\tilde{s}$ of the greedy MDP and unit-level commands $c$. Likewise, we describe the algorithm in the standard cumulative reward setup, while in our experiments we use the normalized cumulative rewards.
This form of policy naturally allows us to perform structured exploration by only randomizing parts of the network. More specifically, the parameters $w$ of the last layer affect all states and actions in a similar way along an episode. The approach we follow is then to perform gradient-free optimization on these parameters $w$ only. Following stochastic methods for zeroth-order optimization (Kiefer et al., 1952; Nemirovsky et al., 1982; Spall, 1997; Duchi et al., 2013; Ghadimi & Lan, 2013), for $\delta > 0$ the gradient of a differentiable function $f : \mathbb{R}^d \to \mathbb{R}$ can be estimated by
$$\nabla f(x) \approx \frac{d}{\delta}\, \mathbb{E}\big[f(x + \delta u)\, u\big],$$
where the expectation is taken over the vector $u$ sampled on the unit sphere (Nemirovsky et al., 1982, chapter 9.3). The constant $\frac{d}{\delta}$ is absorbed by the learning rates, so we ignore it in the following. Given a (state, action) pair $(s, a)$ and the observed cumulative reward $r^{1..t}$ for an episode of length $t$, an estimate of the gradient of the expected cumulative reward with respect to $w$ is thus $r^{1..t} u$. In practice, we use $\frac{r^{1..t}}{t} u$ rather than $r^{1..t} u$, which corresponds to the gradient of the average cumulative reward over the episode; we did not observe a large difference in preliminary experiments.
The overall algorithm is described in Algorithm 1. At the beginning of an episode, a perturbation $u$ is sampled from the unit sphere of $\mathbb{R}^d$ and the policy $s \mapsto \pi_{w + \delta u, \theta}(s)$ is run through the entire episode ($\delta$ is a hyperparameter of the algorithm). 
The perturbation vector plays both the role of performing structured exploration and of providing the gradient estimate of the cumulative reward with respect to $w$. The algorithm performs a minibatch update at the end of the episode. The second loop in Algorithm 1 updates $\theta$ by backpropagation, where $\text{backprop}_{\Psi_\theta(s^t,a^t)}(z)$ denotes the gradient with respect to $\theta$ when the network input is $(s^t, a^t)$ and the backward step uses $z$ as input.
The deterministic exploration along an episode does not provide any update rule for the parameters of the embedding network, because the randomization is the same for every (state, action) pair. We propose a heuristic rule to update the parameters $\theta$ of the embedding network, motivated by the following remark: given a function $(w \in \mathbb{R}^d, v \in \mathbb{R}^d) \mapsto F(\langle w, v \rangle) \in \mathbb{R}$, we have $\nabla_w F = F'(\langle w, v \rangle)\,v$ and $\nabla_v F = F'(\langle w, v \rangle)\,w$. Denoting by $\oslash$ the term-by-term division of vectors (assuming $v$ contains only non-zero values) and by $\odot$ the term-by-term multiplication operator, we obtain:
$$\nabla_v F = \big((\nabla_w F) \oslash v\big) \odot w.$$
Taking $v = \Psi_\theta(s^t, a^t)$ and $u$ as the estimate of $\nabla_w F$, in practice we use the sign of $w \oslash \Psi_\theta(s^t, a^t)$, to avoid exploding gradients due to the division by $\Psi_\theta(s^t, a^t)$:
$$\theta \leftarrow \theta + \text{backprop}_{\Psi_\theta(s^t, a^t)}\Big(\eta\, u \odot \operatorname{sign}\big(w \oslash \Psi_\theta(s^t, a^t)\big)\Big). \qquad (**)$$
Algorithm 1: Zero-order (ZO) backpropagation algorithm
The reasoning above is only an intuitive motivation of the update rule (**) of Algorithm 1, because we neglected that a single $u$ is sampled for an entire episode. We also neglected the argmax operation that chooses the actions. Nonetheless, considering (**) as a crude approximation to some real estimator of the gradient seems to work very well in practice, as we shall see in our experiments. Finally, we use Adagrad (Duchi et al., 2011) to update the parameters of the different layers. We found the use of Adagrad's update scheme fairly important in practice, compared to other approaches such as e.g. RMSProp (Tieleman & Hinton, 2012), even though RMSProp tended to work slightly better with Q-learning or REINFORCE in our experiments.
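A minimal sketch of the last-layer zero-order update described above, assuming an episode runner `run_episode(w)` that executes the deterministic greedy policy with last-layer weights `w` and returns the episode's average normalized return (all names are ours):

```python
import numpy as np

def sample_unit_sphere(d, rng):
    u = rng.normal(size=d)
    return u / np.linalg.norm(u)

def zo_episode_update(w, delta, eta, run_episode, rng):
    """One episode of zero-order exploration and update on the last layer w."""
    u = sample_unit_sphere(len(w), rng)
    # Deterministic policy for the whole episode with a perturbed last layer.
    avg_return = run_episode(w + delta * u)
    # The gradient estimate of the average cumulative reward w.r.t. w is
    # proportional to avg_return * u (constants absorbed into eta).
    return w + eta * avg_return * u

# Example wiring:
# rng = np.random.default_rng(0)
# w = zo_episode_update(w, delta=0.1, eta=0.01, run_episode=run_episode, rng=rng)
```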
We use Torch7 (Collobert et al., 2011) for all our experiments. We connect our Torch code and models to StarCraft through a socket server, as described in Synnaeve et al. (2016). We ran experiments with deep Q-networks (DQN) (Mnih et al., 2013), policy gradient (PG) (Williams, 1992) (detailed in Appendix A), and zero order (ZO). We did an extensive hyper-parameter search, in particular over ε (for epsilon-greedy exploration in DQN), τ (for policy gradient's softmax), learning rates, optimization methods, RL algorithm variants, and potential annealings (detailed in Appendix E)."}, {"section_index": "5", "section_name": "7.2 BASELINE HEURISTICS", "section_text": "As all the results that we report are against the built-in AI, we compare our win rates to those of baseline heuristics. Some of these heuristics often perform the micromanagement in full-fledged StarCraft bots (Ontañón et al., 2013), and are the basis of heuristic search (Churchill et al., 2012). The baselines are the following:
random no change (rand_nc): select a random target for each of our units and do not change this target before it dies (or our unit dies). This spreads damage over several enemy units, but when there are collisions, it may make our units move a lot to be in range of their targets.
noop: send no action. In this case, the built-in AI will control our units, so this exhibits the symmetry (or not!) of a given scenario. As we are always in a defensive position, with the enemy commanded to walk towards us, all other things being equal, it should be easier for the defending built-in AI than for the attacking one. Our models cannot send a noop command.
closest (c): each of our units targets the enemy unit closest to it. This is not a bad heuristic, as the enemy units' formation will make it so that several of our units have the same opponent unit as their closest unit (some form of focus firing), but not all of them (no overkill). It is also quite robust for melee units (e.g. Zealots), as it means they spend less time moving and more time attacking.
weakest closest (wc): each of our units targets the weakest enemy unit. The distance of the enemy unit to the center of mass of our units is used for tie-breaking. This may overkill.
no overkill no change (nok_nc): same as the weakest closest heuristic, but register the number of our units that target each opponent unit, choosing another target to focus fire when it becomes overkill to keep targeting a given unit. Each of our units keeps firing on its target without changing (that would lead to erratic behavior). Our implementation of the "no overkill" component does not take all the dynamics of the game into account, and so if our units die without doing their expected damage on their target, "no overkill" can be detrimental.
Figure 2: Example of the training uncertainty (one standard deviation) over 5 different initializations for DQN (left) and zero-order (right) on the m5v5 scenario. [Both panels plot the win rate against the number of training battles, from 0 to 50000; plot content omitted.]"}, {"section_index": "6", "section_name": "7.3 RESULTS", "section_text": "The first thing that we looked at were sliding average win rates over 400 battles during training against the built-in AI of the various models. In Figure 2, we can see that DQN is much more dependent on initialization and more variable than zero-order (ZO). DQN can unlearn, reach a suboptimal plateau, or overall need a lot of exploration to start learning (high sample complexity).
For all the results that we present in Tables 2 and 3, we ran the models in "test mode" by making them deterministic. 
For DQN we remove the epsilon-greedy exploration (set \u00ab = 0), for PG we do\nnot sample from the Gibbs policy but instead take the value-maximizing action, and for ZO we do\nnot add noise to the last layer.\nTable 2: Test win rates over 1000 battles for the training scenarios, for all methods and for heuristic:\nbaselines. The best result for a given map is in bold.\nTable 3: Win rates over 1000 games for out-of-training-domain maps, for all methods. The map on\nwhich this method was trained on is indicated on the left. The best result is in bold, the best result out\nof the reinforcement learning methods is in italics.\ntrain map test map best heuristic DQN PG ZO train map test map best heuristic DQN PG ZO\nml5v16 = m5v5 96 (welc) 96.79 80 ~wl5v17_ ~w5v5 -78 (c) 10.70 .74\nml5v15 97 (c) 27 16 .80 wl5vl3 1.(rand_nc/ec) 1. 99 1.\n\nml18v18 .98(c/noop) .18 .25 .82 wl5v15 95 (c) 87 61 .99\n\nml18v20 .63(noop) .00 .01 .17 wl8v18 -99 (c) 92 56 1.\n\nw18v20 71 (c) 31.24 .76\na focus firing heuristic (e.g. \u201cattack weakest\u201d) by identifying and locking on a feature, than to alsc\nlearn not to \u201coverkill\u201d. We interpret the learned behaviors in Appendix |F]\nWe then studied how well a model trained on one map performs on maps with a different number o:\nunits, to test generalization. Table[3]contains the results for this experiment. We observe that DQN\nperforms the best on m5v5 when trained on m15v16, because it learned a simpler (but more efficien\non m5v5) heuristic. \u201cNoop\u201d and \u201cattack closest\u201d are quite good with the large Marines map becaus\u00a2\nthey generate less moves (and less collisions). Overall, ZO is consistently significantly better thar\nother RL algorithms on these generalization tasks, even though it does not reach an optimal strategy\nWe also played the best model on each map against each other. We modify the maps in this case such\nthat they are all symmetric, but with the same army composition. Table|4]shows the results for this\nexperiment. It seems that PG and DQN learned very different strategies on wXVY, DQN beats PG\nconsistently when trained on w15v17, while the PG model trained on w15v15 has an edge over DON\nOverall, ZO comes out ahead in every match-up except for m5v5, often by a significant margin.\nThis paper presents two main contributions. First, it establishes StarCraft micromanagement scenarios\nas complex benchmarks for reinforcement learning: with durative actions, delayed rewards, and large\naction spaces making random exploration infeasible. Second, it introduces a new reinforcement\nlearning algorithm that performs better than prior work (DQN, PG) for discrete action spaces in\nthese micromanagement scenarios, with robust training (see Figure|2) and episodically consistent\nexploration (exploring in the policy space).\nThis work leaves several doors open and calls for future work. Simpler embedding models of state and\nactions, and variants of the model presented here, have been tried, none of which produced efficient\nunits movement (e.g. taking a unit out of the fight when its hit points are low). There is ongoing\nTable 4: Win rates over 2000 games against each other.\nheuristics RL\n\nmap rand_nc noop c we nok_nc DQN PG ZO\ndragoons_zealots 14 49 67 83 50 61 .69 .90\nm5v5 AD 84 94 96 83 99 92 1.\nm15v16 .00 81 81 10 68 130.19 .79\n\nwl5v17 19 10 20 02. 
work on convolutional-network-based models that conserve the 2D geometry of the game (while embedding the discrete components of the state and actions). The zero-order optimization technique presented here should be studied more in depth, and empirically evaluated on domains other than StarCraft (e.g. Atari). As for StarCraft scenarios specifically, subsequent experiments will include self-play in training, multi-map training (more generic models), and more complex scenarios which include several types of advanced units with actions other than move and attack. Finally, the goal of playing full games of StarCraft should not get lost, so future scenarios would also include the actions of "recruiting" units (deciding which types of unit to use), and how to best make use of them."}, {"section_index": "7", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Y-Lan Boureau, Antoine Bordes, Florent Perronnin, Dave Churchill, Léon Bottou and Alexander Miller for helpful discussions and feedback about this work and earlier versions of the paper. We thank Timothée Lacroix and Alex Auvolat for technical contributions to our StarCraft/Torch bridge. We thank Davide Cavalca for his support on Windows virtual machines in our cluster environment."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "David Abel, Alekh Agarwal, Fernando Diaz, Akshay Krishnamurthy, and Robert E Schapire. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.
Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819–840, 2002.
Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, 38(2), 2008.
David Churchill, Abdallah Saffidine, and Michael Buro. Fast heuristic search for RTS game combat scenarios. In AIIDE, 2012.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
Mohammad Ghavamzadeh, Sridhar Mahadevan, and Rajbala Makar. Hierarchical multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems, 13(2):197–229, 2006.
Junling Hu and Michael P Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, volume 98, pp. 242–250, 1998.
Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, (5):834–846, 1983.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. 
Torch7: A matlab-like environment for machine\nlearning. In BigLearn, NIPS Workshop, number EPFL-CONF- 192376, 2011.\nMarc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations\nand Trends in Robotics, 2(1-2):1\u2014142, 2013.\nohn C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates for zero-order convex\noptimization: the power of two function evaluations. arXiv preprint arXiv: 1312.2139, 2013.\nsylvain Gelly and Yizao Wang. Exploration exploitation in go: Uct for monte-carlo go. In NIPS: Neural\nInformation Processing Systems Conference On-line trading of Exploration and Exploitation Workshop, 2006\nMatthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. arXiv\npreprint arXiv: 1511.04143, 2015.\nJack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. The Annal\nof Mathematical Statistics, 23(3):462-466, 1952.\nMichael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings 0\nthe eleventh international conference on machine learning, volume 157, pp. 157-163, 1994.\nFrancis Maes, Ludovic Denoyer, and Patrick Gallinari. Structured prediction with reinforcement learning.\nMachine learning, 77(2-3):271-301, 2009.\nBhaskara Marthi, Stuart J Russell, David Latham, and Carlos Guestrin. Concurrent hierarchical reinforcemen\nlearning. In LJCAI, pp. 779-785, 2005.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and\nMartin Riedmiller. Playing atari with deep reinforcement learning. In Proceedings of NIPS, 2013.\nJunhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perceptior\nand action in minecraft. arXiv preprint arXiv: 1605.09128, 2016.\nSantiago Ontan6n, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. A\nsurvey of real-time strategy game ai research and competition in starcraft. Computational Intelligence and A\\\nin Games, IEEE Transactions on, 5(4):293-311, 2013.\nFrank Sehnke, Christian Osendorfer, Thomas Riickstie8, Alex Graves, Jan Peters, and Jiirgen Schmidhuber\nPolicy gradients with parameter-based exploration for control. In Artificial Neural Networks-ICANN 2008, pp\n387-396. Springer, 2008.\nFrank Sehnke, Christian Osendorfer, Thomas Riickstie8, Alex Graves, Jan Peters, and Jiirgen Schmidhuber\nParameter-exploring policy gradients. Neural Networks, 23(4):551\u2014559, 2010.\nDavid Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministi\npolicy gradient algorithms. In JCML, 2014.\nPeter Stone and Manuela Veloso. Team-partitioned, opaque-transition reinforcement learning. In Proceedings o,\nthe third annual conference on Autonomous Agents, pp. 206-212. ACM, 1999.\nRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.\nRichard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for\nreinforcement learning with function approximation. In NIJPS, volume 99, pp. 1057-1063, 1999.\nDan Klein, David Burkett, David Hall, Taylor-Kirkpatrick Berk, John Blitzer, John DeNero, Haomiao Huang\nsiming Liu, Sushil J Louis, and Christopher Ballinger. Evolving effective micro behaviors in rts game. In\nComputational Intelligence and Games (CIG), 2014 IEEE Conference on, pp. 1-8. IEEE, 2014.\nShie Mannor, Reuven Y Rubinstein, and Yohai Gat. 
The cross entropy method for fast policy search. In JCML\npp. 512\u2014519. 2003.\nan Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped\ndan. arXiv preprint arXiv: 1602.04621, 2016a.\n[an Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions.\nIn Proceedings of The 33rd International Conference on Machine Learning. pp. 2377\u20142386. 2016b.\nJames C Spall. A one-measurement form of simultaneous perturbation stochastic approximation. Automatica\n33(1):109-112, 1997.\nSainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation\narXiv preprint arXiv: 1605.07736, 2016.\nRichard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9-44\n1988.\nGabriel Synnaeve and Pierre Bessiere. A bayesian model for rts units control applied to starcraft. In Computa\ntional Intelligence and Games (CIG), 2011 IEEE Conference on. pp. 190-196. IEEE, 2011.\nstvan Szita and Andras Lorincz. Learning tetris using the noisy cross-entropy method. Neural computation, 1\n(12):2936-2941, 2006.\nMing Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the tentl\ninternational conference on machine learning, pp. 330-337, 1993.\nGerald Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58\u2014-68, 1995\nGerald Tesauro. Extending q-learning to general adaptive multi-agent systems. In Advances in neural informatio1\nprocessing systems, pp. None, 2003.\nWilliam R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidenc\nof two samples. Biometrika, 25(3/4):285\u2014294, 1933.\nI. Tieleman and G. Hinton. Lecture 6.5\u2014RmsProp: Divide the gradient by a running average of its recen\nmagnitude. COURSERA: Neural Networks for Machine Learning, 2012.\nHado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. arXi\npreprint arXiv: 1509.06461, 2015.\n~hristopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992\nStefan Wender and Ian Watson. Applying reinforcement learning to small scale combat in the real-time strategy\ngame starcraft: broodwar. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on, pp.\n402-408. IEEE, 2012.\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning.\nMachine learning, 8(3-4):229-256, 1992."}, {"section_index": "9", "section_name": "We here briefly describe the two algorithms we use as baseline, Q-learning (Sutton & Barto}|/1998\nand REINFORCE (Williams}|T992).", "section_text": "Q-learning The Q-learning algorithm in the finite-horizon setting learns an action-value function\nQ by solving the Bellman equation\nVs \u20ac S,Va \u20ac A(s), Qi(s, a) = 2 Pls'ls.a) a)(r(s, 8\u2019) + omar, Quaals' ,a\u2019))\nTraining is usually carried out by collecting traces (s', a\u2018, s\u2018t!,r'+!),_1 p_1 using e-greedy\nexploration: at state s and stage t, an action in argmax,\u00a2_4(s) Q:(s, a) is chosen with probability 1\u2014\u00ab\nor an action in A(s) is chosen uniformly at random with probability \u00a2. 
In practice, we use stationary\nQ functions (i.e., Q; = Q:41), which are neural networks, as described in Section [5] Training\nis carried out using the standard online update rule for Q learning with function approximatior\n\n(see (Mnih et al.| (2015) for DQN), which we apply in mini-batches (hyper-parameters are detailed ir\nAppendix|E).\nREINFORCE. me algorithm REINFORCE belongs to the family of policy gradient algorithms\n. Given a stochastic policy 7 parameterized by O, learning is carried out by\ngenerating ae 8 fighgttl pti). coe T-1 by following the current policy. Then, stochastic\ngradient updates are performed, using the gradient estimate:\nT\nrs s'-T)V@ log(te(a'|s\")).\n\nt=1\nVe use a Gibbs policy (with temperature parameter 7) as the stochastic policy"}, {"section_index": "10", "section_name": "B THE GREEDY MDP", "section_text": "A natural way to define the greedy MDP (Section [4) is to define the set of atomic actions of the\ngreedy policy as all possible (unit, command) pairs for the units whose command is still not decided\nThis would lead to an inference with quadratic complexity with respect to the number of units, which\nis undesirable.\nWe settled on iteratively choosing a unit, then a command to apply to that unit, which yields ar\nalgorithm with 2|s| steps for state s, linear in the number of units. Since the commands are execute\nconcurrently by the environment after all commands have been decided, the cumulative reward doe:\nnot depend on the order in which we choose the units, for instance: uniformly at random among\nremaining units. More formally, using the notation a, to denote the k first (unit, command) pair:\nof an action a (with the convention a, , \u2014 (h). the state snace S of the oreedv MDP is defined bv\n\u201cThe policy may not be determistic if we break ties randomly in the argmax.\nS,Va \u20ac A(s), Q:(s,a) = Ss p(s\u2019|s,a)(r(s, s)+ ymax Qts1(s',a\u2019)) ,\n\nBCS /EA(s')\nwhere Q, is the state-action value function at stage t of an episode, and Q(s, a) = 0 by convention.\nQ1(s, a) is also 0 whenever a terminal state is reached, and transitions from a terminal state only go\nto the same terminal state.\nThis training phase is distinct from the test phase, in which we record the average cumulative reward\nof the deterministic policy?|s + argmax,< 4.) Q(s, a).\nexp(\u00a2e (a, s)/T)\n\nTo (als) = Voc A(s ) exp (ge (0, \u00a7 s)/T)\u2019\nwhere \u00a2@ is a neural network with paramters \u00a9 that gives a real-valued score to each (state, action)\npair. For testing, we use the deterministic policy me(s) = argmax.- 4,.) de(a,s).\nS= {(5, 1.4, Uk41) | sE\u20acS,0<k < |s|,a = ((ur, 1), -, (Us, \u00a2s|)) \u20ac A(s)}.\nr((S,1,.h\u2014-1; Uk), (8, 41,.h;Uk41)) =0, and F((s, a1, )5)\u2014~1, Uys), (8,0, u\u2019)) = r(s, 8\u2019)\nF((8,Q1..b-1, Uk), (8, 41..b, Uk+1)) =0, and F((s, a4. )5)-1, UYs}), (8,9, u\u2019)) = r(s, 8\u00b0\nIt can be shown that an optimal policy for this greedy MDP chooses actions that are optimal for the\noriginal MDP, because the immediate reward in the original MDP does not depend on the order in\nwhich the actions are taken. This result only applies if the family of policies has enough capacity. 
In\npractice, some ordering may be easier to learn than others, but we did not investigate this issue because\nthe gain, in terms of computation time, of the random ordering was critical for the experiments.\nThe normalized rewards (from Section|4) maintain the invariant n\u2019-7 = Ie} but more importantly,\nthe normalization can be applied to the Bellman equation (2), which becomes\nVs \u20ac S,Va \u20ac A(s), Q(s, a) ee (r(s, 8\u2019) 4 2(s') max Q(s',a\u2019)).\n\nies \u20acA(s\u2019)\nThis normalization does not change the optimal policy because it maintains the invariant that the\nexpected normalized cumulative reward from a given state s to the end of an episode (by following\nthe optimal deterministic policy) is the expected cumulative reward from this s divided by a valuc\nthat depends only on s.\nThe stochastic gradient updates for Q-learning can easily be modified accordingly, as well as the\ngradient estimate in REINFORCE (3) in which we replace 7 by n."}, {"section_index": "11", "section_name": "D STARCRAFT SPECIFICS", "section_text": "We advocate that using existing video games for RL experiments is interesting because the simulators\nare oftentimes complex, and we (the AI programmers) do not have control about the source code of\nthe simulator. In RTS games like StarCraft, we do not have access to a simulator (and writing one\nwould be a daunting task), so we cannot use (Monte Carlo) tree search (Gelly & Wang] |2006)\n\neven less so in the setting of full games (Ontan\u00e9n et al.|[2013). In this paper, we consider the problem\nof micromanagement scenarios, a subset of full RTS play. Micromanagement is about making good\nuse of a given set of units in an RTS game. Units have different features, like range, cooldown\nhit points (health), attack power, move speed, collision box etc. These numerous features and the\ndynamics of the game advantage player that take the right actions at the right times. Specifically for\nthe game(s) StarCraft, for which there are professional players, very good competitive players and\nprofessional players perform more than 300 actions per minute during intense battles.\nWe ran all our experiments on simple scenarios of battles of an RTS game: StarCraft: Broodwar.\nThese scenarios can be considered small scale for StarCraft, but they already deem challenging for\nexisting RL approaches. The joint action space is in @((#4commands per unit)#\"\u2122\"s), with a peak\nnumber of units of about 400 (Synnaeve & Bessiere||2011). For an example scenario of 15 units\n(that we control) against 16 enemy units, even while reducing the action space to \"atomic\" actions\n(surrounding moves, and attacks), we obtain 24 (8+16) possible discrete actions per unit for our\ncontroller to choose from (241\u00b0 actions total) at the beginning of the battle. Battles last for tens of\nseconds, with durative actions, simultaneous moves, and at 24 frames per second. The strategies that\nwe need to learn consist in coordinated sets of actions that may need to be repeated, e.g. focus firing\nwithout overkill. We use a featurization that gives access onlv to the state from the same. we do not\nThe action space A(S) of each state 5 \u20ac S is constant and equal to the set of commands C. 
Moreover.\nfor each state s of the original MDP, any action a = ((u1, C1), -.-; (us|, \u00a2]s|) \u20ac A(s), the transition\nprobabilities 6 in the greedy MDP are defined by\ni\n\nVk \u20ac {0,...,|8] \u2014 1}, A((s, a1..6, Ue+1)|(S, 01..b-1, Uk), Ck) = plo\n\n1 t\nand Ws\u2019 \u20ac S,Vu' \u20acU(s'), A((s\u2019,0,u\")|(s, a1..js|-15 U)s|)> es) = Pala |s,@).\nFinally, using the same notation as above, the reward function 7 between states that represent\nintermediate steps of the algorithm is 0 and the last unit to play receives the reward:\nFor most of these tasks (\u201cmaps\u201d), the number of units that our RL agent has to consider changes\nover an episode (a battle), as do its number of actions. The fact that we are playing in this specific\nadversarial environment is that if the units do not follow a coherent strategy for a sufficient amount o!\ntime, they will suffer an unrecoverable loss, and the game will be in a state of the game where the\nunits will die very rapidly and make little damage, independently of how they play \u2014 a state that is\nmostly useless for learning.\nOur tasks (\u201cmaps\u201d) represent battles with homogeneous types of units, or with little diversity (2 types\nof unit for each of the players). For instance, they may use a unit of type Marine, that is one soldie:\nwith 40 hit points, an average move speed, an average range (approximately 10 times its collisior\nsize), 15 frames of cooldown, 6 of attack power of normal damage type (so a damage per second o!\n9.6 hit points per second, on a unit without armor). On symmetric and/or monotyped maps, strategie:\nthat are required to win (on average) are \u201cfocus firing\u201d, without overkill (not more units targeting <\nunit than what is needed to kill it). For perfect win rates, some maps may require that the AI moves\nits units out from the focus firing of the opponent."}, {"section_index": "12", "section_name": "E HYPER-PARAMETERS", "section_text": "Taking an action on every frame (24 times per second at the speed at which human play StarCraft) for\nevery unit would spam the game needlessly, and it would actually prevent the units from moving?\nWe take actions for all units synchronously on the same frame, even skip_frames frames. We\ntried several values of this hyper-parameter (5, 7, 9, 11, 13, 17) and we only saw smooth changes in\nperformance. We ran all the following experiments with a skip_frames of 9 (meaning that we\ntake about 2.6 actions per unit per second).We also report the strongest numbers for the baselines\nover all these skip frames. We optimize all the models after each battle (episode), with RMSProp\n(momentum 0.99 or 0.95), except for zero-order for which we optimized with Adagrad (Adagrad\ndid not seem to work better for DQN nor REINFORCE). In any case, the learning rate was chosen\namong {10~2, 10-3, 10-4}.\nFor all methods, we tried experience replay, either with episodes (battles) as batches (of sizes 20, 50\n100), or additionally with random batches of (82, az, 7441, 8141, terminal?) quintuplets in the case\nof Q-learning, it did not seem to help compared to batching with the last battle. So, for consistency\nwe only present results where the training batches consisted of the last episode (battle).\nFor REINFORCE we searched over 7 \u20ac {0.1, 0.5, 1, 10}.\nFor zero-order, we tried 5 \u20ac {0.1,0.01, 0.001}.\nWe visually inspected the model\u2019s performance on large battles. On the larger Marines map (m15v16)\nDQN learned to focus fire. 
Because this map has many units, focus firing leads to units bumping\ninto each other to try to focus on a single unit. The PG player seemed to have a policy that attacks\nthe closest marine, though it doesn\u2019t do a good job switching targets. The Marines that are not in\nrange often bump into each other. Our zero order optimization learns a hybrid between focus firing\neveral actions are durative, including moves. Moves have a dynamic consisting of per-unit-type\nturn rate, max speed, and acceleration parameters.\nFor Q-learning (DQN), we tried two schemes of annealing for epsilon greedy, \u00ab = Wrest\n\nwith \u00a2 the optimization batch, and \u00ab = max(0.01, =*), Both with \u20ac) \u20ac {0.1, 1}, and respectively\n\neat\n\u20aca \u20ac {0,9} and e, \u20ac {10-\u00b0, 10-4, 10-3}. We found that the first works marginally better and used\nthat in the subsequent experiments with \u20ac) = 1 and \u20ac, = 1 for most of the scenarios. We also used\nDouble DQN as in (thus implemented as target DQN). For the target/double\nnetwork, we used a lag 0: optimizations, thus a lag of 100 battles in all the following experiments.\nAccording to our initial runs/sweep, it seems to slightly help for some cases of over-estimation of the\nQ value.\nand attacking the closest unit. Units would switch to other units in range if possible, but still focus\non specific targets. This leads to most Marines attacking constantly, as well as focus firing when\nthey can. However, the learned strategy was not perfected, since Marines would still split their fire\noccasionally when left with few units.\nIn the Wraiths map (w15v17), the DQN player\u2019s strategy was hard to decipher. The most likely\nexplanation is that they tried to attack the closest target, though it is likely the algorithm did not\nconverge to a specific strategy. The PG player learned to focus fire. However, because it only takes 6\nWraiths to kill another, 9 actions are \"wasted\" during the focus firing (at the beginning of the fight.\nwhen all our units are alive). Our zero order player learns that focusing only on one enemy is not\ngood, but it does not learn how many attacks are necessary. This leads to a much higher win rate, but\nthe player still assigns more than 6 Wraiths to an enemy target (maybe for robustness to the loss of\none of our units), and occasionally will not focus fire when only a few Wraiths are remaining. This is\nsimilar to what the zero order player learned during the Marines scenario."}]
By14kuqxx
[{"section_index": "0", "section_name": "BIT-PRAGMATIC DEEP NEURAL NETWORK COMPUT-\nING", "section_text": "orge, juddpatr, delmasll,sayeh,moshovos}@ece.utoronto.ca\nWe quantify a source of ineffectual computations when processing the multiplica-\ntions of the convolutional layers in Deep Neural Networks (DNNs) and propose\nPragmatic (PRA), an architecture that exploits it improving performance and en-\nergy efficiency. The source of these ineffectual computations is best understood in\nthe context of conventional multipliers which generate internally multiple terms.\nthat is, products of the multiplicand and powers of two, which added together pro-\nduce the final product . At runtime, many of these terms are zero\nas they are generated when the multiplicand is combined with the zero-bits of\nthe multiplicator. While conventional bit-parallel multipliers calculate all terms\nin parallel to reduce individual product latency, PRA calculates only the non-\nzero terms resulting in a design whose execution time for convolutional layers\nis ideally proportional to the number of activation bits that are 1. Measurements\ndemonstrate that for the convolutional layers on Convolutional Neural Networks\nand during inference, PRA improves performance by 4.3x over the DaDiaNao\n(DaDN) accelerator and by 4.5x when DaDN uses an 8-bit\nquantized representation . DaDN was reported to be 300x faster\nthan commodity graphics processo"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Neural Network (DNN) hi\n\ne typically uses either 16-bit fixed-point (2014)\n\nor quantized 8-bit numbers|W% 6) and bit-parallel compute units. For convolutional layers\nthat account for most of the execu ime in Convolutional Neural Networks (CNNs) during image\nclassification, these bit-parallel engines perform many ineffectual computations. Specifically, these\nlayers perform several several inner products, where multiple pairs of weights and activations are\nmultiplied and then reduced into an output activation. Any time a zero bit of an activation or a weigh\nis multiplied it adds nothing to the final output activations. These ineffectual bits are introduced by\nthe conventional positional number representation and if avoided it would take even less time tc\ncalculate each product improving energy and performance. As a first step, this work targets the\nineffectual bits of activations only. Section [2] shows that in recent image classification networks\n93% and 69% of activation bit and weight products are ineffectual when using respectively 16-bi\nfixed-point and 8-bit quantized representations.\nThis work presents Pragmatic (PRA) a DNN accelerator whose goal is to process only the essential\n(non-zero) bits of the input activations PRA employs the following four key techniques: 1) on-the-\nfly conversion of activations from a storage representation (e.g., conventional positional numbet\nor quantized) into an explicit representation of the essential bits only, 2) bit-serial activation/bit-\nparallel weight processing, an idea borrowed from STR but adapted for the\naforementioned representation, 3) judicious SIMD (single instruction multiple data) lane grouping\nto maintain wide memory accesses and to avoid fragmenting and enlarging the multi-MB on-chip\nweight memories (Sections [5]and ), and 4) computation re-arrangement (Sectioi to reduce\ndatapath area. 
All evaluated PRA variants maintain wide memory accesses and use highly-parallel\nSIMD-style (single-instruction multiple-data) computational units. PRA introduces an additional\ndimension upon which software can improve performance and enerevy efficiency by controlling ac-\nJorge Albericio , Patric Judd, Alberto Delmas Lascorz, Sayeh Sharify & Andreas Moshovos\n\nSlactrical and Camniter Bnoineering"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Sources of ineffectual computation with conventional positional representation and fixed:\nlength hardware precision.\ntivation values judiciously in order to reduce their essential bit content while maintaining accuracy.\nThis work explores such an alternative, where the software explicitly communicates how many pre-\nfix and suffix bits to discard after each layer.\nExperimental measurements with recent CNNs for image classification demonstrate that most\nstraightforward PRA variant, boosts average performance for the convolutional layers to 2.59x over\nthe state-of-the-art DaDN accelerator. Pragmatic\u2019s average energy efficiency is 1.48x over DaDN\nand its area overhead is 1.35x. Another variant further boosts performance to 3.1x over DaDN at\nthe expense of an additional 0.7% area."}, {"section_index": "3", "section_name": "2 MOTIVATION", "section_text": "Let us assume a p-bit bit-parallel multiplier using a straightforward implementation of the \u201cShift\nand Add\u201d algorithm where n x s is calculated as )7?_, nj; - (s < i), where n; the i-th bit of n. The\nmultiplier computes p terms, each a product of s and of a bit of n, and adds them to produce the final\nresult. The terms and their sum can be calculated concurrently to reduce latency {Wallace] (1964).\nWith such a hardware arrangement there are two sources of ineffectual computations that resul\nfrom: 1) an Excess of Precision (EoP), and 2) Lack of Explicitness (LoE). Figure[I]shows an example\nillustrating these sources with a bit-parallel multiplier using an 8-bit unsigned fixed-point numbe:\nwith 4 fractional and 4 integer bits. While 10.101(2) requires just five bits, our 8-bit bit-paralle\nmultiplier will zero-extend it with two prefix and one suffix bits. This is an example of EoP and i:\ndue to the fixed-precision hardware. Two additional ineffectual bits appear at positions 1 and -2 as <\nresult of LoE in the positional number representation. In total, five ineffectual bits will be processec\ngenerating five ineffectual terms.\nOur number could be represented with an explicit list of its three constituent powers of 2: (1,-1,-\n3). While such a representation may require more bits and thus be undesirable for storage, coupled\nwith the abundant parallelism that is present in DNNs layers, it provides an opportunity to revisit\nhardware design improving performance and energy efficiency.\nTable[5| reports the essential bit content of the activation stream of recent CNNs for two commonly\n\nused fixed length representations: 1) 16-bit fixed-point of DaDianNao|Chen et al.| , 2) 8-bi\nquantized of Tensorflow|Warden] (2016). The essential bit content is the average num non-zerc\n\nbits that are 1. Two measurements are presented per representation: over all neuron values (\u201cAll\u201d)\nand over the non-zero neurons (\u201cNZ\u201d) as accelerators that can skip zero activations for fixed-poin\n\nrepresentations have been recently proposed|Han et al. (2016); Albericio et al. 
2016).\nWhen considering all activations, the essential bit-content is at most 12.7% and 38.4% for the fixed:\npoint and the quantized representations respectively. Even when considering the non-zero activa:\ntions the essential bit content remains well below 50% suggesting that the potential exists to improve\nperformance and energy efficiency over approaches that target zero valued activations only.\nThis section illustrates the idea behind Pragmatic via a simplified example.\nBit-Parallel Hardware Precision\nsae\n\nRequired\nprefix precision __suffix\n\n[0] 1]o}1 [0] 1 [0\n\nEssential bits\n(1,-1,-3)\nTable 1: Average fraction of non-zero bits per activation for two fixed-length representations: 16-bit\nfixed-point, and 8-bit quantized. All: over all activations. NZ: over non-zero activation only.\nFigure 2: An Example Illustrating How Pragmatic Skips Ineffectual Activation Bits Yet Exceedin;\nthe Performance of a Bit-Parallel Engine\nThe bit-parallel unit of Figure [2a] multiplies two activations with their respective weights and via\nan adder reduces the two products. The unit reads all activation and weight, (no = 001(2),n1 =\n010(2)) and (s9 = 001 (2), $1 = 111(2)) respectively in a single cycle. As a result, the two sources\nof inefficiency EoP and LoE manifest here: no and n, are represented using 3 bits instead of 2\nrespectively due to EoP. Even in 2 bits, they each contain a zero bit due to LoE. As a result, four\nineffectual terms are processed when using standard multipliers such as those derived from the Shift\nand Add algorithm. In general, given N activation and weight pairs, this unit will take [N/2] cycles\nto process them regardless of their precision and the essential bit content of the activations.\nFigure [2b] shows a simplified PRA engine. In this example, activations are no longer represented as\nvectors of bits but as vectors of offsets of the essential bits. For example, activation no = 001, is\nrepresented as ong = (0), and a activation value of 111(2) would be represented as (2, 1, 0). An out-\nof-band bit (wire) not shown indicates the activation\u2019s end. A shifter per activation uses the offsets to\neffectively multiply the corresponding weight with the respective power of 2 before passing it to the\nadder tree. As a result, PRA processes only the non-zero terms avoiding all ineffectual computations\nthat were due to EoP or LoE. To match the throughput of the bit-parallel engine of Figure[2a] we take\nadvantage of weight reuse and processes multiple activations groups in parallel. In this example, six\nactivations (no = 001 (2),n1 = 0102), nb = 0002), nr, = 010(2),ng = 010 (2), nf = 000 (2) are\ncombined with the two weights as shown. For this example, PRA would process the six activation\nand weight pairs in a single cycle, a speedup of 3x over the bit-parallel engine.\nPragmatic is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed by\nChen et al.|Chen et al. (2014). Figure[3a|shows a DaDN tile which processes 16 filters concurrently\ncalculating 16 activation and weight products per filter for a total of 256 products per cycle. To do,\neach cycle the tile accepts 16 weights per filter for total of 256 weight, and 16 input activations. The\ntile multiplies each weight with only one activation whereas each activation is multiplied with 16\nweight, one per filter. The tile reduces the 16 products into a single partial output activation per filter,\nfor a total of 16 partial output activations for the tile. 
Each DaDN chip comprises 16 such tiles, each\nNeurons\n\nlo MsB\n\nSynapses\n\n'\nLSB ONg [O}\n1} fo oni ff\nn,|1 , ESI\non\u2019 |\n0) ons + i\non\u201d) 2+\u2014- Es\n=}\n5 4 Tx s 0} | _\ns.{t ine\ni i ee of U\n4 4 <=\nBit-Parallel Unit (b) Pragmatic Unit\nNBin NBin\n\nseu . Window | ote TT} 4\nine s Laned eo TF i\nCC \u2014\u2014\u2014\u2014\n\u2014> oe\nSORA ead re\nI] Window! Ottset = : xs)\nLane 15. tress! vi\nSB (DRAM) 1 PiP(0.0) + PIP(15.)\noe sys a } .\nLane a Fitter ret] aime\nFilter Perch Fido... lg\nLane 0 siranse {es |\none tes\nae : oj\nops 7 + {\u2014-NBout\nSyarse Loy 1P15 Fiter ce\nLand} Lane 15, = Th\nFilter 7 7 syraeg pos\nLane 18 i i ms eis PU. 15\nnapa x a\nLane 15] \u2018SB (eDRAM)\n\n(a) (b)\nFigure 3: a) DaDianNao Tile. b) Pragmatic Tile\nprocessing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes\n16 activations and 256 x 16 = 4K weights producing 16 x 16 = 256 partial output activations.\nInternally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle one per synapse\nlane, 2) an input neuron buffey'|(NBin) which provides 16 activations per cycle through 16 neuron\nlanes, and 3) a neuron output buffer (NBout) which accepts 16 partial output activations per cycle. In\nthe tile\u2019s datapath, or the Neural Functional Unit (NFU) each neuron lane is paired with 16 synapse\nlanes one from each filter. Each synapse and neuron lane pair feed a multiplier and an adder tree pet\nfilter lane reduces the 16 per filter products into a partial sum. In all, the filter lanes produce each\na partial sum per cycle, for a total of 16 partial output activations per NFU. Once a full window is\nprocessed, the 16 resulting sums, are fed through a non-linear activation function, f, to produce the\n16 final output activations. The multiplications and reductions needed per cycle are implemented\nvia 256 multipliers one per synapse lane and sixteen 17-input (16 products plus the partial sum from\nNBout) adder trees one per filter lane.\nDaDN\u2019s main goal was minimizing off-chip bandwidth while maximizing on-chip compute utiliza-\ntion. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM SB per tile for a total\nof 32MB eDRAM. All inter-layer activations except for the initial input and the final output are\nstored in a 4MB shared central eDRAM Neuron Memory (NM) which is connected via a broadcast\ninterconnect to the 16 NBin buffers. Off-chip accesses are needed only for reading the input image,\nthe filter weights once per layer, and for writing the final output.\nProcessing Approach: Processing starts by reading from external memory the first layer\u2019s weights\nsynapses, and the input image. The weights are distributed over the SBs and the input is stored\ninto NM. Each cycle an input activation brick is broadcast to all units. Each units reads 16 weight\nbricks from its SB and produces a partial output activation brick which it stores in its NBout. Once\ncomputed, the output activations are stored through NBout to NM and then fed back through the\nNBins when processing the next layer. Loading the next set of activations from external memory\ncan be overlapped with the processing of the current layer as necessary.\nused the terms neuron and synapse to refer to activations and weights respectively and\nccordingly. 
We maintain this terminology for the design\u2019s components.\nTerminology: For clarity, in what follows n(x, y,i) and o(x, y,7) refer to an input and an output\nactivation at coordinates (x, y,i) respectively. The weight of filter f at coordinates (x, y, i) is de-\nnoted as s/(a,y,i). The term brick refers to a set of 16 elements of a 3D activation or weight\narray which are contiguous along the 7 dimension, e.g., n(, y, i)...n(z, y,i + 15). Bricks will be\ndenoted by their origin element with a B subscript, e.g., g(x, y,7). The term pallet refers to a set\nof 16 bricks corresponding to adjacent, using a stride S, windows along the x or y dimensions, e.g.,\nnp(2,y,1)...np(x, y+ 15 x S,2) and will be denoted as np(x, y,i). The number of activations per\nbrick, and bricks per pallet are design parameters."}, {"section_index": "4", "section_name": "5 Pragmatic", "section_text": "PRA\u2019s goal is to process only the essential bits of the activations. To do so PRA a) converts, on-the-\nfly, the input activation representation into one containing only the essential bits, and b) processes\none essential bit per activation and a full 16-bit weight per cycle. Since PRA processes activatior\nbits serially, it may take up to 16 cycles to produce a product of a activation and a weight. To always\nmatch or exceed the performance of the bit-parallel units of DaDN, PRA processes more activations\nconcurrently exploiting the abundant parallelism of the convolutional layers. The remaining of this\nsection describes in turn: 1) an appropriate activation representation, 2) the way PRA calculates\nterms, 3) how multiple terms are processed concurrently to maintain performance on par with DaDN\nin the worst case, and 4) how PRA\u2019s units are supplied with the necessary activations from NM.\nThat is, each cycle, the weight s multiplied by f, the next constituent power two of n, and the resul\nis accumulated. This multiplication can be implemented as a shift and an AND.\nBoosting Compute Bandwidth over DaDN: To match DaDN\u2019s performance PRA needs to pro-\ncess the same number of effectual terms per cycle. Each DaDN tile calculates 256 activation and\nweight products per cycle, or 256 x 16 = 4K terms. While most of these terms will be in practice\nineffectual, to guarantee that PRA always performs as well as DaDN it should process 44 terms pet\ncycle. For the time being let us assume that all activations contain the same number of essential bits.\nso that when processing multiple activations in parallel, all units complete at the same time and thus\ncan proceed with the next set of activations in sync. The next section will relax this constraint.\nSince PRA processes activations bits serially, it produces one term per activation bit and weight pai\nand thus needs to process 4K such pairs concurrently. The choice of which 4/\u00a2 activation bit anc\nweight pairs to process concurrently can adversely affect complexity and performance. For example\nit could force an increase in SB capacity and width, or an increase in NM width, or be ineffective\ndue to unit underutilization given the commonly used layer sizes.\nFortunately, it is possible to avoid increasing the capacity and the width of the SB and the NM\nwhile keeping the units utilized as in DaDN. Specifically, a PRA tile can read 16 weight brick:\nand the equivalent of 256 activation bits as DaDN\u2019s tiles do (DaDN processes 16 16-bit activation:\nor 256 activation bits per cycle). 
Specifically, as in DaDN, each PRA tile processes 16 weigh\nbricks concurrently, one per filter. However, differently than DaDN where the 16 weight bricks are\ncombined with just one activation brick which is processed bit-parallel, PRA combines each weigh\nbrick with 16 activation bricks, one from each of 16 windows, which are processed bit-serially\nThe same 16 activation bricks are combined with all weight bricks. These activation bricks form\na pallet enabling the same weight brick to be combined with all. For example, in a single cycle <\nPRA title processing filters 0 through 15 could combine combine s(x, y, 0), ...,8}5(a, y, 0) wit\nnA (x,y, 0), rR (x +2, y, 0), ...nRA (x +31, y, 0) assuming a layer with a stride of 2. In this case\ns*(a, y, 2) would be paired with Pk (x,y, 2), n?RA(a + 2,y, 2), ..., nP*A (a + 31, y, 2) to produce\nthe output weights on(x, y, 4) through on(x \u00bb 15, y, 4).\nAs the example illustrates, this approach allows each weight to be combined with one activation per\nwindow whereas in DaDN each weight is combined with one activation only. In total, 256 essential\nactivation bits are processed per cycle and given that there are 256 weights and 16 windows, PRA\nInput Activation Representation: PRA starts with an input activation representation where it\nis straightforward to identify the next essential bit each cycle. One such representation is an\nexplicit list of oneffsets, that is of the constituent powers of two. For example, an activation\nn = 5.5(19) = 0101.1(2) would be represented as n = (2,0,\u20141). In the implementation de-\nscribed herein, activations are stored in 16-bit fixed-point in NM, and converted on-the-fly in the\nPRA representation as they are broadcast to the tiles. A single oneffset is processed per activation\nper cycle. Each oneffset is represented as (pow, eon) where pow is a 4-bit value and eon a sin-\ngle bit which if set indicates the end of a activation. For example, n = 101(2) is represented as\n\nnPRA \u2014 ((0010,0)(0000, 1)).\nCalculating a (weight, activation) product: PRA calculates the product of weight s and activatior\nprocesses 256 x 16 = 4K activation bit and weight pairs, or terms per cycle producing 256 partic\noutput activations, 16 per filter, or 16 partial output activation bricks per cycle.\nSupplying the Inputs: Thus far it was assumed that all input activations have the same numbe\nof essential bits. Under this assumption, all neuron lanes complete processing their terms at th\nsame time, allowing PRA to move on to the next activation pallet and the next set of weight brick\nn one step. This allows PRA to reuse STR\u2019s approach for fetching the next pallet from the single\nsorted NM Briefly, with unit stride the 256 weights would be typically a\nstored in the same NM row or at most over two adjacent NM rows and thus can be fetched in\nmost two cycles. When the stride is more than one, the weights will be spread over multiple row\nand thus multiple cycles will be needed to fetch them all. Fortunately, fetching the next pallet ca\noe overlapped with processing the current one. Accordingly, if it takes N.Mc to access the ne)\npallet from NM, while the current pallet requires Po cycles to process, the next pallet will begi\nosrocessing after max(N Mc, P-) cycles. When NMep > Pe performance is lost waiting for NM\nIn practice it highly unlikely that all activations will have the same number of essential bits. In\ngeneral, each neuron lane if left unrestricted will advance at a different rate. 
In the worst case, each\nneuron lane may end up needing activations from a different activation brick, thus breaking PRA\u2019s\nability to reuse the same weight brick. This is undesirable if not impractical as it would require\npartitioning and replicating the SB so that 4K unrelated weight could be read per cycle, and it would\nalso increase NM complexity and bandwidth.\nFortunately, these complexities can be avoided with pallet-level neuron lane synchronization where\nall neuron lanes \u201cwait\u201d (a neuron lane that has detected the end of its activation forces zero terms\nwhile waiting) for the one with the most essential bits to finish before proceeding with the nex\npallet. Under this approach it does not matter which bits are essential per activation, only how many\nexist. Since, it is unlikely that most pallets will contain an activation with 16 essential terms, PRA\nwill improve performance over DaDN. Section|5.1|will discuss finer-grain synchronization scheme:\nthat lead to even better performance. Before doing so, however, we detail PRA\u2019s design.\n64\n4\n5 1.14\nynapse fe ry z\n16E\n1 o_nbout\nsynapse BIT] +S *\n16) b4F max! bal\n16 1st << prec\nDone cycle\n\nIshi Tnbout\n64\n\n4\n1.14\nSynapse J | ry a\n16 o_nbout\n1 abl\nSynapse, fPI DR ch.\n16 be max| 5\n16 ist << prec\nPone oycle Tnbout\n\nIshi\nFigure 4: Pragmatic Inner Product Unit."}, {"section_index": "5", "section_name": "5.1 STRUCTURE AND PERFORMANCE AND AREA OPTIMIZATIONS", "section_text": "Figure [3b]shows the Pragmatic tile architecture which comprises an array of 16 x 16 = 256 prag-\nmatic inner product units (PIPs). PIP(i,j) processes an activation oneffset from the i-th window and\nits corresponding weight from the j-th filter. Specifically, all the PIPs along the i-th row receive the\nsame weight brick belonging to the i-th filter and all PIPs along the j-th column receive an oneffset\nfrom each activation from one activation brick belonging to the j-th window. The necessary activa-\nIst stage\n\n8\nynapse J\n16\n\nneg\n\n2nd stage\n\nsynapse, fF\n16&\n\nDone\n\n(a)\n\ncycle 1 cycle 2 cycle 3 cycle 4\ntet\n\nPO BS, 44|) BEd fe i\noTepSBONN () LO, ws =|) &|_ =o\n\nO\nMDTORN ~ Tae, & 4\nNeuron values Onefisets py\n\n(b)\nFigure 5: 2-stage shifting. a) Modified PIP. b) Example: Processing three 9-bit weight and activation\npairs with L = 2.\n8\nynapse FF\n16|E\n\nsynapse, fF\n16&\n\nDone\n\n(a)\n\n2nd stage cycle 1 cycle 2 cycle 3 cycle 4\n; tet\nTOPE BE, ong BEY ds rs i\nr| [....| @epppoom t) o\\\\ 0} f-3|\\ 4 6 z\n. a a &|+)e3|_ S])es\nOMTETOR _|/ | 2-2; & g\nNeuron values Onefisets py\n(b)\nBrick Indexes: 2|\nMax # oneffsets: 4\nBricks: []|\n\n2\n\nB\nSynapses 1\ncorresponding\n\nto brick #\n*.\n\nExtra Synapse\n\n3 cycle 1 i H cycle 3 2 1 eveles 2 cycle 7 2 cycle 8\nq Ut Lt u\no 2itjo ar air 2\n5 2/2) Fa 2/2 2|2 2\no OIG It Oe i\n\u2018SB ify A U SB SB 4 SB\n\nVal\n\nRecisters<...\nFigure 6: Per-column synchronization example: one extra synapse register and 1x2 PIP array capa\nble of processing two windows in parallel. The two numbers per brick show: the first from the top i\nthe brick\u2019s index, (0, 1,2) and (0\u2019, 1\u2019, 2\u2019) for the bricks of the first and second window. The secon\nis the maximum count of oneffsets in its activations, (2, 4, 4) and (5, 2, 2) respectively. The number\nin the registers indicate the index of the corresponding bricks, i.e., a synapse register containing |\nK stores the weights corresponding to activation bricks with indexes K and K\u2019. 
In cycles 3 to 8\nthicker lines indicate registers being loaded or wires being used.\ntion oneffsets are read from NBin where they have been placed by the Dispatcher and the Oneffset\ngenerators units as Section[5.TJexplains. Every cycle NBin sends 256 oneffsets 16 per window lane.\nAll the PIPs in a column receive the same 16 oneffsets, corresponding to the activations of a sin-\ngle window. When the tile starts to process a new activation pallet, 256 weights are read from SB\nthrough its 256 synapse lanes as in DaDN and are stored in the synapse registers (SR) of each PIP.\nThe weights and oneffsets are then processed by the PIPs.\nDispatcher and Oneffset Generators The Dispatcher reads 16 activation bricks from NM, as ex-\npected by the PRA tiles. The oneffset generator converts their activations on-the-fly to the oneffset\nrepresentation, and broadcasts one oneffset per activation per cycle for a total of 256 oneffsets to\nall titles. Fetching and assembling the 16 activation bricks from NM is akin to fetching words\nwith a stride of S from a cache structure. Once the 16 activation bricks have been collected, 256\noneffset generators operate in parallel to locate and communicate the next oneffset per activation.\nA straightforward 16-bit leading one detector is sufficient. The latency of the oneffset generators\nand the dispatcher can be readily hidden as they can be pipelined as desired overlapping them with\nprocessing in the PRA tiles.\nReducing Title Area with 2-Stage Shifting: Any shift can be performed in two stages as two\nsmaller shifts: a << K =a \u00ab (K'+C) = ((a \u00ab K\u2019) \u00ab C). Thus, to shift and add T weights\nby different offsets Ko,..., A, we can decompose the offsets into sums with a common term C,\ne.g., K; = K/+C. Accordingly, PIP processing can be rearranged using a two stage processing\nwhere the first stage uses a per weight specific offset K/, and the second stage, the common across\nall weights offset C. This arrangement can be used to reduce the width of the weight shifters and\nof the adder tree by sharing one common shifter after the adder tree as Figure[5p shows. A design\nparameter, L, defines the number of bits controlling the weight shifters so that the design can process\noneffsets which differ by less than 2\u201d in a single cycle. This reduces the size of the weight shifters\nand reduces the size of the adder tree to support terms of 16 + 2/ \u2014 1 bits only.\nIncreasing Performance with Per-Column Neuron Lane Synchronization: The pallet neuron\nlane synchronization scheme of Section|5]is one of many possible synchronization schemes. Finer-\ngrain neuron lane synchronization schemes are possible leading to higher performance albeit at a\ncost. Among them, per column neuron lane synchronization is an appealing scheme offering a good\nbalance of cost vs. performance. Here each PIP column operates independently but all the PIPs\nalong the same column synchronize before moving to the next activation brick. Since the PIPs along\nthe same column operate in sync, they all process one set of 16 weight bricks which can be read\nusing the existing SB interface. However, given that different PIP columns operate now out-of-\nPragmatic Inner-Product Unit: Figure |4]shows the PIP internals. Every cycle, 16 weights are\ncombined with their corresponding oneffsets. Each oneffsets controls a shifter effectively multiply-\ning the weight with a power of two. The shifted weights are reduced via the adder tree. 
An AND\ngate per weight supports the injection of a null terms when necessary. In the most straightforward\ndesign, the oneffsets use 4-bits, each shifter accepts a 16-bit weight and can shift it by up to 15\nbit positions producing a 31-bit output. Finally, the adder tree accepts 31-bit inputs. Section [51]\npresents an enhanced design that requires narrower components improving area and energy.\nsync, the SB would be accessed more frequently and could become a bottleneck. There are two\nconcerns: 1) different PIP columns may need to perform two independent SB reads while there are\nonly one SB port and one common bus connecting the PIP array to the SB, and 2) there will be\nrepeat accesses to SB that will increase SB energy, while the SB is already a major consumer of\nenergy. These concerns are addressed as follows: 1) only one SB access can proceed per cycle thus\na PIP column may need to wait when collisions occur. 2) A set of registers, or synapse set registers\n(SSRs) are introduced in front of the SB each holding a recently read set of 16 weight bricks. Since\nall PIP columns will eventually need the same set of weight bricks, temporarily buffering them\navoids fetching them repeatedly from the SB. Once a weight set has been read into an SSR, it stays\nthere until all PIP columns have copied it (a 4-bit down counter is sufficient for tracking how many\nPIP columns have yet to read the weight set). This policy guarantees that the SB is accessed the\nsame number of times as in DaDN. However, stalls may incur as a PIP column has to be able to\nstore a new set of weights into an SSR when it reads it from the SB. Figure(6]shows an example.\nSince each neuron lane advances independently, in the worst case, the dispatcher may need to fetch\n16 independent activation bricks each from a different pallet. The Dispatcher can buffer those pallets\nto avoid rereading NM, which would, at worst, require a 256 pallet buffer. However, given that the\nnumber SSRs restricts how far apart the PIP columns can be, and since Section[6.2|shows that only\none SSR is sufficient, a two pallet buffer in the dispatcher is all that is needed.\nThis improved generator reduces runs of adjacent oneffsets a...b into pairs of the form a + 1, \u2014b\nSingle oneffsets or gaps inside runs are represented by a positive or negative oneffset, respectively\nFor example a neuron value of 11011 that would normally be encoded with oneffsets (4, 3, 1,0) car\ninstead be represented with (5, \u20143, +2, \u20140) or even more economically with (5, \u20142,\u20140). This i:\n\nequivalent to a Radix-4 Booth encoding and will never emit more than | + 1| oneffsets, where a\nis the neuron precision.\nFinally, booth encoding is conventionally used to reduce the number of cycles needed to perforn\nmultiplication in single shift-and-add multipliers typically reserved for low cost low performance de\nsigns, or to reduce the depth of bit-parallel multipliers. Pragmatic with its 2-stage shifting and judi\ncious lane synchronization enables its practical use in a massively data-parallel accelerator boostin;\nperformance beyond what is possible with bit-parallel units.\nThe Role of Software: PRA enables an additional dimension upon which hardware and software\ncan attempt to further boost performance and energy efficiency, that of controlling the essential\nactivation value content. 
This work investigates a software guided approach where the precision\nrequirements of each layer are used to zero out a number of prefix and suffix bits at the output of\neach layer. Using the profiling method of Judd et al., (2015), software communicates\nthe precisions needed by each layer as meta-data. The hardware trims the output activations before\nwriting them to NM using AND gates and precision derived bit masks.\nAfter reviewing the experimental methodology the rest of this section is organized as follows: Sec-\ntions |6.T]and [6.2)explore the PRA design space considering respectively single- and 2-stage shift-\ning configurations, and column synchronization. Section|6.2] reports energy efficiency for the best\nFurther Increasing Performance with Improved Oneffset Encoding: Since PIPs in Pragmatic\ncan negate any input term, it is possible to enhance the oneffset generator to generate fewer oneffsets\nfor neuron values containing runs of ones by allowing signed oneffsets\nThis encoding will never produce more oneffsets compared to the baseline encoding. However,\nbecause of the 2-stage shifting, it is possible that this encoding will increase the number of cycles\nneeded. This will happen when the oneffset distribution among the bit groups being processed\ntogether during 2-stage shifting changes.\nThe performance, area and energy efficiency of Pragmatic is compared against DaDN\n2014) and Stripes |Judd et al.|(2016b), two state-of-the-art DNN accelerators. DaDN is the fastest\nbit-parallel accelerator proposed to date that processes all activations regardless of theirs values, and\nSTR improves upon DaDN by exploiting the per layer precision requirements of DNNs. Cnvlutin\n\nimproves upon DaDN by skipping most zero- or near-zero-valued activations|Albericio et al.|(2016),\nhowever, Stripes has been shown to outperform it.\nTable 2: Per convolutional layer activation precision profiles\nconfiguration. Section|6.4] analyzes the contribution of the software provided precisions. Finally\nreports performance for designs using an 8-bit quantized representation.\nMethodology: The same methodology is used for all systems for consistency. A custom cycle\naccurate simulator models execution time. For all systems, computation was scheduled to minimiz\nenergy, which led to the same schedule for all. To estimate power and area, the designs were synthe\nsized with the Synopsis Design Compiler|Synopsys|for a TSMC 65nm library. The NBin and NBou\nSRAM buffers were modeled using CACTI The eDRAM are:\nand energy were modeled with Destiny (2015). To compare against STR, the pe\nlayer numerical representation requirements reported in Table[2]were found using the methodolog}\n\nof Judd et al.\\Judd et al.|(2016b). All PRA configurations studied exploit software provided preci\nection]5. 1]\n\nsions as per S Section|6.4]analyzes the impact of this information on overall performance\nAll performance measurements are for the convolutional layers only which account for more that\n92% of the overall execution time in DaDN (2014). PRA does not affect the executior\ntime of the remaining layers.\nThis section evaluates the single-stage shifting PRA configuration of Section:\nshifting variants of Section|5.1] Section[6.1]reports performance while Sectio\npower. In this section, All PRA systems use pallet synchronization.\n\nand the 2-stag\nreports area anc\nPerformance: Figure|7|shows the performance of STR (leftmost bars) and of PRA variants relative\nto DaDN. 
The PRA systems are labelled with the number of bits used to operate the first-stage,\nweight shifters, e.g., the weight shifters of \u201c2-bit\u201d , or PRA\u00bb, are able to shift to four bit positions\n(0-3). \u201c4-bit\u201d or PRA4p, is the single-stage Pragmatic, or PRAsingie of Sections 5} 5.1]\nweicht shifters can shift to 16 bit positions (0-15). It has no second stage shifter.\nPRAgingle improves performance by 2.59x on average over DaDN compared to the 1.85 average\nimprovement with STR. Performance improvements over DaDN vary from 2.11 for VGG19 to\n2.97x for VGGM. As expected the 2-stage PRA variants offer slightly lower performance than\nPRAsgingle, however, performance with PRA\u00bb, and PRA3p is always within 0.2% of PRA singie. Even\nPRA\u00bb which does not include any weight shifters outperforms STR by 20% on average. Given a set\nof oneffsets, PRAo, will accommodate the minimum non-zero oneffset per cycle via its second level\nshifter.\nArea and Power: Table |3] shows the absolute and relative to DaDN area and power. Two are\nmeasurements are reported: 1) for the unit excluding the SB, NBin and NBout memory blocks, anc\n2) for the whole chip comprising 16 units and all memory blocks. Since SB and NM dominate\nchip area, the per area area overheads Given the performance advantage of PRA, the area and power:\noverheads are justified. PRA\u00bb\u00bb is particularly appealing as its overall area cost over BASE is only\n1.35x and its power 2.03x while its performance is 2.59x on average. Accordingly, we restric!\nattention to this configuration in the rest of this evaluation.\nPerformance: Figure [8] reports the relative performance for PRA, with column synchronization\nand as a function of the number of SSRs as per Section Configuration PRA3,* \u2018 refers to a\nPer Layer\n\nNetwork Activation Precision in Bits\nAlexNet 9-8-5-5-7\n\nNiN 8-8-8-9-7-8-8-9-9-8-8-8\nGoogLeNet | 10-8-10-9-8-10-9-8-9-10-7\nVGG_M 7-7-1-8-7\n\nVGG_S 7-8-9-7-9\n\nVGG_I9 12-12-12-11-12-10-11-11-13-12-\n\n13-13-13-13-13-13\nStrees obit a Lit ea Zit a St HIE\n\n\u2018Alexnet\u2019 NIN Google VGGM VGGS VGG19. geo\nFigure 7: Pragmatic\u2019s performance relative\nto DaDianNao using 2-stage shifting and per-\npallet synchronization.\nDDN | STR | 0-bit | 1-bit | 2-bit | 3-bit | 4-bit\n\nArea U. 1.55 |] 3.05 ] 3.11 | 3.16 | 3.54 ] 4.41 | 5.75\nA Area U. 1.00 | 1.97 | 2.01 | 2.04 | 2.29 | 2.85 | 3.71\nArea T. 90 114 115 116 122 136 157\nA Area T. 1.00 | 1.27 | 1.28 | 1.29 | 1.35 | 1.51 | 1.75\nPower T. 18.8 | 30.2 | 31.4 | 34.5 | 38.2 | 43.8 | 51.6\nA Power T. 1.00 | 1.60 | 1.67 | 1.83 | 2.03 | 2.33 | 2.74\nTable 3: Area [mm?] and power [W ] for the unit and the whole chip. Pallet synchronization\nconfiguration using x SSRs. Even PRA}; boosts performance to 3.1x on average close to the\n3.45x that is ideally possible with PRASS\u00ae.\nArea and Power: Table [4] reports the area per unit, and the area and power per chip. The best\nperforming PRA3\u00ae increases chip area by only 1.35x and power by only 2.19x over DaDN.\nEnergy Efficiency: Figure[I0|shows the energy efficiency of various configurations of Pragmatic\nEnergy Efficiency, or simply efficiency for a system NEW relative to BASE is defined as the ratic\nExasz/Enew of the energy required by BASE to compute all of the convolution layers over that o\nNEW. For the selected networks, STR is 16% more efficient than DaDN. The power overhead o:\nPRAsingte (PRA4p) is more than the speedup resulting in a circuit that is 5% less efficient thar\nDaDN. 
PRA_2b reduces that power overhead while maintaining performance, yielding an efficiency of 28%. PRA_2b^1R yields the best efficiency at 48% over DaDN.

Figure 8: Relative performance of PRA_2b with column synchronization and as a function of the SB registers used.

Figure 9: Relative performance of Pragmatic using Improved Oneffset Encoding for different configurations. Marked: performance not using IOE.

Figure 10: Relative energy efficiency.

Table 4: Area [mm^2] and power [W] for the unit and the whole chip for column synchronization and PRA_2b."}, {"section_index": "6", "section_name": "6.3. IMPROVED ONEFFSET ENCODING", "section_text": "Figure 9 reports performance for Pragmatic when using the enhanced oneffset generator described in Section 5.1. The considered configurations include PRA_0b, PRA_1b, and PRA_2b (with pallet synchronization), and PRA_2b^1R. PRA_0b degrades by 7%, but the other configurations show improvements of 26%, 48%, and 41% respectively. A cause of degradation for PRA_0b is the increased spread of oneffset values (for example, the pair of neurons 011101, 010101 takes 4 cycles with conventional encoding and 5 with enhanced encoding, even though the total count of oneffsets is reduced from 7 to 6)."}, {"section_index": "7", "section_name": "6.4 THE IMPACT OF SOFTWARE", "section_text": "All PRA configurations studied thus far used software provided per layer activation precisions to reduce essential bit content. PRA does not require these precisions to operate. Table 5 shows what fraction of the performance benefits is due to the software guidance for PRA_2b^1R, the best configuration studied. The results demonstrate that: 1) PRA would outperform the other architectures even without software guidance, and 2) on average, software guidance improves performance by 19%.

Table 5: Performance benefit due to software guidance.

Figure 11 reports performance for DaDN and PRA configurations using the 8-bit quantized representation used in TensorFlow (Warden, 2016; Google, 2016). This quantization uses 8 bits to specify arbitrary minimum and maximum limits per layer for the activations and the weights separately, and maps the 256 available 8-bit values linearly into the resulting interval. This representation has higher flexibility and better utilization than the reduced precision approach of Stripes, since the range doesn't have to be symmetrical and the limits don't have to be powers of two, while still allowing straightforward multiplication of the values. The limit values are set to the maximum and the minimum activation values for each layer, and the quantization uses the recommended rounding mode.
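The following minimal sketch illustrates this min/max linear quantization. The per-layer limits and the round-to-nearest mode are assumptions standing in for the exact TensorFlow routines:

```python
import numpy as np

def quantize_8bit(x, lo, hi):
    """Map values linearly from [lo, hi] (per-layer limits) onto the 256
    available 8-bit codes; lo/hi need not be symmetric or powers of two."""
    scale = (hi - lo) / 255.0
    return np.clip(np.round((x - lo) / scale), 0, 255).astype(np.uint8)

def dequantize_8bit(q, lo, hi):
    scale = (hi - lo) / 255.0
    return lo + q.astype(np.float32) * scale

acts = np.array([0.0, 0.37, 1.42, 2.0], dtype=np.float32)
q = quantize_8bit(acts, acts.min(), acts.max())   # limits set per layer
print(q, dequantize_8bit(q, acts.min(), acts.max()))
```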
Figure 11: Performance with the 8-bit quantized representation (marked: without IOE).

Figure 11 reports performance relative to DaDN for PRA_single, PRA_2b, PRA_2b^1R, and the per-column ideal configuration. PRA performance benefits persist and are over 4.5x for PRA_2b^1R. Measuring the area and energy of these designs is left for future work; however, the absolute area and energy needed by all will be lower due to the narrower representation. Moreover, given that the tile logic will occupy relatively less area for the whole chip, and given that the SB and NM account for significant area and energy, the overall overheads of the PRA designs over DaDN will be lower than those measured for the 16-bit fixed-point configurations."}, {"section_index": "8", "section_name": "7 RELATED WORK", "section_text": "The acceleration of Deep Learning is an active area of research and has yielded numerous proposals for hardware acceleration. DaDianNao (DaDN) is the de facto standard for high-performance DNN acceleration (Chen et al., 2014). In the interest of space, this section restricts attention to methods that are either directly related to DaDN, or that follow a value-based approach to DNN acceleration, as Pragmatic falls under this category of accelerators. Value-based accelerators exploit the properties of the values being processed to further improve performance or energy beyond what is possible by exploiting computation structure alone. Cnvlutin (Albericio et al., 2016) and Stripes (Judd et al., 2016b; Judd et al., 2016a) are such accelerators, and they have already been discussed and compared against in this work.

PuDianNao is a hardware accelerator that supports seven machine learning algorithms including DNNs (Liu et al., 2015). ShiDianNao is a camera-integrated low power accelerator that exploits integration to reduce communication overheads and to further improve energy efficiency (Du et al., 2015). Cambricon is the first instruction set architecture for Deep Learning (Liu et al., 2016). Minerva is a highly automated software and hardware co-design approach targeting ultra low-voltage, highly-efficient DNN accelerators (Reagen et al., 2016). Eyeriss is a low power, real-time DNN accelerator that exploits zero valued activations for memory compression and energy reduction (Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne, 2016). The Efficient Inference Engine (EIE) exploits efficient activation and weight representations and pruning to greatly reduce communication costs, to improve energy efficiency and to boost performance by avoiding certain ineffectual computations (Han et al., 2016; Han et al., 2015). EIE targets fully-connected (FC) layers and was shown to be 12x more efficient than DaDN on FC layers, and 2x less efficient for convolutional layers. All aforementioned accelerators use bit-parallel units. While this work has demonstrated Pragmatic as a modification of DaDN, its computation units and, potentially, its general approach could be compatible with all aforementioned accelerator designs. This investigation is interesting future work.

Profiling has been used to determine the precision requirements of a neural network for a hardwired implementation (Kim et al., 2014). EoP has been exploited in general purpose hardware and other application domains. For example, Brooks et al. (Brooks & Martonosi, 1999) exploit the prefix bits due to EoP to turn off parts of the datapath, improving energy. Park et al. (Park et al., 2010) use a similar approach to trade off image quality for improved energy efficiency. Neither approach directly improves performance.

To the best of our knowledge Pragmatic is the first DNN accelerator that exploits not only the per layer precision requirements of CNNs but also the essential bit information content of the activation values.
While this work targeted high-performance implementations, Pragmatic's core approach should be applicable to other hardware accelerators. We have investigated Pragmatic only for inference and with image classification convolutional neural networks. While desirable, applying the same concept to other network types, and to layers other than the convolutional ones, is left for future work. It would also be interesting to study how the Pragmatic concepts can be applied to more general purpose accelerators or even graphics processors."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Jorge Albericio, Patrick Judd, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Cnvlutin: Ineffectual-neuron-free deep neural network computing. In 2016 IEEE/ACM International Conference on Computer Architecture (ISCA), 2016.

David Brooks and Margaret Martonosi. Dynamically exploiting narrow width operands to improve processor power and performance. In Proceedings of the 5th International Symposium on High Performance Computer Architecture, HPCA '99, Washington, DC, USA, 1999. IEEE Computer Society. ISBN 0-7695-0004-8.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raquel Urtasun, and Andreas Moshovos. Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets. arXiv:1511.05236 [cs.LG], 2015.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016a.

Patrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. Computer Architecture Letters, 2016b.

Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and O. Temam. DaDianNao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 609-622, Dec 2014. doi: 10.1109/MICRO.2014.58.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv:1602.01528 [cs], February 2016. URL http://arxiv.org/abs/1602.01528.

Naveen Muralimanohar and Rajeev Balasubramonian. CACTI 6.0: A tool to understand large caches.

Synopsys. Design Compiler. http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages.

This appendix complements the analysis of Section 2 by estimating the potential of an idealized Pragmatic accelerator that can skip any term (product of a full precision weight and one input activation bit) while also improving execution time proportionally. Note the number of terms is considered before the Improved Oneffset Encoding described in Section 5.1 is applied.
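As a concrete illustration of the term counting used below, the following sketch computes, for one activation, the number of additions each engine would be charged. The 16-bit width, the per-layer precision p, and the popcount-as-essential-bits reading are taken from the text; the helper names are ours:

```python
def terms_dadn(activation, width=16):
    return width                        # bit-parallel: every bit is a term

def terms_str(activation, p):
    return p                            # bit-serial over the p profiled bits

def terms_pra(activation):
    return bin(activation).count("1")   # only essential (one) bits are terms

# The text's example n = 10.001(2): five significant bits, two of them ones.
n = 0b10001
print(terms_dadn(n), terms_str(n, p=5), terms_pra(n))   # 16, 5, 2
```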
To estimate PRA's potential, this section compares the number of terms that would be processed by various computing engines for the convolutional layers of recent CNNs (see Section 6) for the two aforementioned baseline activation representations.

16-bit Fixed-Point Representation: The following computing engines are considered: 1) a baseline representative of DaDN using 16-bit fixed-point bit-parallel units (Chen et al., 2014), 2) a hypothetical enhanced baseline ZN that can skip all zero valued activations, 3) Cnvlutin (CVN), a practical design that can skip zero value activations for all but the first layer (Albericio et al., 2016), 4) STR, which avoids EoP (see Table 2, Section 6; Judd et al., 2016b), 5) an ideal, software-transparent PRA, PRA-fp16, that processes only the essential activation bits, and 6) an ideal PRA, PRA-red, where software communicates in advance how many prefix and suffix bits can be zeroed out after each layer (see Section 5.1).

Figure 12a reports the number of terms normalized over DaDN, where each multiplication is accounted for using an equivalent number of terms, or equivalently additions: 16 for DaDN, ZN, and CVN, p for a layer using a precision of p bits for STR, and the number of essential activation bits for PRA-fp16 and for PRA-red. For example, for n = 10.001(2), the number of additions counted would be 16 for DaDN and CVN, 5 for STR as it could use a 5-bit fixed-point representation, and 2 for PRA-fp16 and PRA-red.

On average, STR reduces the number of terms to 53% compared to DaDN, while skipping just the zero valued activations could reduce them to 39% if ZN was practical, and to 63% in practice with CVN. PRA-fp16 can ideally reduce the number of additions to just 10% on average, while with software provided precisions per layer, PRA-red reduces the number of additions further to 8% on average. The potential savings are robust across all CNNs, remaining above 87% for all DNNs with PRA-red.

Figure 12: Convolutional layer computational demands. (a) 16-bit fixed-point. (b) 8-bit quantized.

8-bit Quantized Representation: Figure 12b shows the relative number of terms processed for: 1) a bit-parallel baseline, 2) an ideal, yet impractical bit-parallel engine that skips all zero activations, and 3) PRA. In the interest of space, and since PRA subsumes STR and CVN, they are not considered. Pragmatic's benefits are significant even with an 8-bit quantized representation. On average, skipping all the zero valued activations would eliminate only 30% of the terms, whereas Pragmatic would remove up to 71% of the terms.

Figure 13: AlexNet: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations.

9.2 ESSENTIAL BIT CONTENT DISTRIBUTIONS

This section reports the distributions of the essential bit count for the activations processed per convolutional layer for the networks studied.
Three distributions are shown per network, for the activations under three different representations: 1) 16-bit fixed-point, 2) per layer fixed-point, and 3) 8-bit quantized. A peak appears at values having four bits that are 1 for the quantized representation, since the value zero is mapped to a non-zero index having four bits that are one (114). Note that, as in Section 9.1, the distributions are taken before Improved Oneffset Encoding is applied.

Figure 14: NiN: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations.

Figure 15: GoogLeNet: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations.

Figure 16: VGG_M: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations.

Figure 17: VGG_S: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations.

Figure 18: VGG_19: Per Layer '1'-bit Count Distributions. (a) 16-bit: Full-Precision. (b) 16-bit: Per Layer Precision. (c) Quantized: Activations."}]
[{"section_index": "0", "section_name": "INDUCTIVE BIAS OF DEEP CONVOLUTIONAL\nNETWORKS THROUGH POOLING GEOMETRY", "section_text": "Nadav Cohen & Amnon Shashua\n{cohennadav, shashuahs@cs.huji.ac.il\nOur formal understanding of the inductive bias that drives the success of convo-\nlutional networks on computer vision tasks is limited. In particular, it is uncleat\nwhat makes hypotheses spaces born from convolution and pooling operations sc\nsuitable for natural images. In this paper we study the ability of convolutional\nnetworks to model correlations among regions of their input. We theoretically\nanalyze convolutional arithmetic circuits, and empirically validate our findings\non other types of convolutional networks as well. Correlations are formalizec\nthrough the notion of separation rank, which for a given partition of the input\nmeasures how far a function is from being separable. We show that a polynomi-\nally sized deep network supports exponentially high separation ranks for certair\ninput partitions, while being limited to polynomial separation ranks for others.\nThe network\u2019s pooling geometry effectively determines which input partitions are\nfavored, thus serves as a means for controlling the inductive bias. Contiguous\npooling windows as commonly employed in practice favor interleaved partitions\nover coarse ones, orienting the inductive bias towards the statistics of natural im-\nages. Other pooling schemes lead to different preferences, and this allows tailor-\ning the network to data that departs from the usual domain of natural imagery. Ir\naddition to analyzing deep networks, we show that shallow ones support only lin-\near separation ranks, and by this gain insight into the benefit of functions brought\nforth by depth \u2014 they are able to efficiently model strong correlation under favorec\npartitions of the input."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A central factor in the application of machine learning to a given task is the inductive bias, i.e. the\nchoice of hypotheses space from which learned functions are taken. The restriction posed by the\ninductive bias is necessary for practical learning, and reflects prior knowledge regarding the task\nat hand. Perhaps the most successful exemplar of inductive bias to date manifests itself in the use\nof convolutional networks (LeCun and Bengio|(1995)) for computer vision tasks. These hypothe-\nses spaces are delivering unprecedented visual recognition results (e.g. (2012)\nSzegedy et al. (2015); Simonyan and Zisserman 2014); He et al.|(2015)), largely responsible fot\nthe resurgence of deep learning (LeCun et al. (2015). Unfortunately, our formal understanding ot\nthe inductive bias behind convolutional networks is limited \u2014 the assumptions encoded into these\nmodels, which seem to form an excellent prior knowledge for imagery data, are for the most part a\nmystery.\nExisting works studying the inductive bias of deep networks (not necessarily convolutional) do sc\nin the context of depth efficiency, essentially arguing that for a given amount of resources, mor\u00ab\nlayers result in higher expressiveness. More precisely, depth efficiency refers to a situation where\na function realized by a deep network of polynomial size, requires super-polynomial size in orde:\nto be realized (or approximated) by a shallower network. 
In recent years, a large body of research was devoted to proving existence of depth efficiency under different types of architectures (see for example Telgarsky (2015); Eldan and Shamir (2015)). Nonetheless, despite the vast attention it is receiving, depth efficiency does not convey the complete story behind the inductive bias of deep networks. While it does suggest that depth brings forth functions that are otherwise unattainable, it does not explain why these functions are useful. Loosely speaking, the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "hypotheses space of a polynomially sized deep network covers a small fraction of the space of all functions. We would like to understand why this small fraction is so successful in practice.

Our analysis approaches the study of inductive bias from the direction of function inputs. Specifically, we study the ability of convolutional arithmetic circuits to model correlation between regions of their input. To analyze the correlations of a function, we consider different partitions of input regions into disjoint sets, and ask how far the function is from being separable w.r.t. these partitions. Distance from separability is measured through the notion of separation rank (Beylkin and Mohlenkamp (2002)), which can be viewed as a surrogate of the L^2 distance from the closest separable function. For a given function and partition of its input, high separation rank implies that the function induces strong correlation between sides of the partition, and vice versa.

We show that a deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial or linear (in network size) separation ranks for others. The network's pooling geometry effectively determines which input partitions are favored in terms of separation rank, i.e. which partitions enjoy the possibility of exponentially high separation rank with polynomial network size, and which require the network to be exponentially large. The standard choice of square contiguous pooling windows favors interleaved (entangled) partitions over coarse ones that divide the input into large distinct areas. Other choices lead to different preferences, for example pooling windows that join together nodes with their spatial reflections lead to favoring partitions that split the input symmetrically. We conclude that in terms of modeled correlations, pooling geometry controls the inductive bias, and the particular design commonly employed in practice orients it towards the statistics of natural images (nearby pixels more correlated than ones that are far apart). Moreover, when processing data that departs from the usual domain of natural imagery, prior knowledge regarding its statistics can be used to derive respective pooling schemes, and accordingly tailor the inductive bias.

With regards to depth efficiency, we show that separation ranks under favored input partitions are exponentially high for all but a negligible set of the functions realizable by a deep network. Shallow networks on the other hand treat all partitions equally, and support only linear (in network size) separation ranks. Therefore, almost all functions that may be realized by a deep network require a replicating shallow network to have exponential size.
By this we return to the complete depth efficiency result of Cohen et al. (2016b), but with an added important insight into the benefit of functions brought forth by depth — they are able to efficiently model strong correlation under favored partitions of the input.

The remainder of the paper is organized as follows. Sec. 2 provides a brief presentation of necessary background material from the field of tensor analysis. Sec. 3 describes the convolutional arithmetic circuits we analyze, and their relation to tensor decompositions. In sec. 4 we convey the concept of separation rank, on which we base our analyses in sec. 5 and 6. The conclusions from our analyses are empirically validated in sec. 7. Finally, sec. 8 concludes.

A specific family of convolutional networks gaining increased attention is that of convolutional arithmetic circuits. These models follow the standard paradigm of locality, weight sharing and pooling, yet differ from the most conventional convolutional networks in that their point-wise activations are linear, with non-linearity originating from product pooling. Recently, Cohen et al. (2016b) analyzed the depth efficiency of convolutional arithmetic circuits, showing that besides a negligible (zero measure) set, all functions realizable by a deep network require exponential size in order to be realized (or approximated) by a shallow one. This result, termed complete depth efficiency, stands in contrast to previous depth efficiency results, which merely showed existence of functions efficiently realizable by deep networks but not by shallow ones. Besides their analytic advantage, convolutional arithmetic circuits are also showing promising empirical performance. In particular, they are equivalent to SimNets — a deep learning architecture that excels in computationally constrained settings (Cohen and Shashua (2014); Cohen et al. (2016a)), and in addition, have recently been utilized for classification with missing data (Sharir et al. (2016)). Motivated by these theoretical and practical merits, we focus our analysis in this paper on convolutional arithmetic circuits, viewing them as representative of the class of convolutional networks. We empirically validate our conclusions with both convolutional arithmetic circuits and convolutional rectifier networks — convolutional networks with rectified linear (ReLU, Nair and Hinton (2010)) activation and max or average pooling. Adaptation of the formal analysis to networks of the latter type, similarly to the adaptation of the analysis in Cohen et al. (2016b) carried out by Cohen and Shashua (2016), is left for future work."}, {"section_index": "3", "section_name": "2 PRELIMINARIES", "section_text": "The analyses carried out in this paper rely on concepts and results from the field of tensor analysis. In this section we establish the minimal background required in order to follow our arguments(1), referring the interested reader to Hackbusch (2012) for a broad and comprehensive introduction to the field.

(1) The definitions we give are actually concrete special cases of more abstract algebraic definitions as given in Hackbusch (2012). We limit the discussion to these special cases since they suffice for our needs and are easier to grasp.

The core concept in tensor analysis is a tensor, which for our purposes may simply be thought of as a multi-dimensional array. The order of a tensor is defined to be the number of indexing entries in the array, which are referred to as modes. The dimension of a tensor in a particular mode is defined as the number of values that may be taken by the index in that mode. For example, a 4-by-3 matrix is a tensor of order 2, i.e.
it has two modes, with dimension 4 in mode 1 and dimension 3 in mode 2. If A is a tensor of order N and dimension M_i in each mode i in [N] := {1,...,N}, the space of all configurations it can take is denoted, quite naturally, by R^{M_1 x ... x M_N}.

A fundamental operator in tensor analysis is the tensor product, which we denote by ⊗. It is an operator that intakes two tensors A in R^{M_1 x ... x M_P} and B in R^{M_{P+1} x ... x M_{P+Q}} (orders P and Q respectively), and returns a tensor A ⊗ B in R^{M_1 x ... x M_{P+Q}} (order P+Q) defined by: (A ⊗ B)_{d_1...d_{P+Q}} = A_{d_1...d_P} * B_{d_{P+1}...d_{P+Q}}. Notice that in the case P = Q = 1, the tensor product reduces to the standard outer product between vectors, i.e. if u in R^{M_1} and v in R^{M_2}, then u ⊗ v is no other than the rank-1 matrix uv^T in R^{M_1 x M_2}.

We now introduce the important concept of matricization, which is essentially the rearrangement of a tensor as a matrix. Suppose A is a tensor of order N and dimension M_i in each mode i in [N], and let (I,J) be a partition of [N], i.e. I and J are disjoint subsets of [N] whose union gives [N]. We may write I = {i_1,...,i_|I|} where i_1 < ... < i_|I|, and similarly J = {j_1,...,j_|J|} where j_1 < ... < j_|J|. The matricization of A w.r.t. the partition (I,J), denoted [A]_{I,J}, is the prod_{t=1}^{|I|} M_{i_t}-by-prod_{t=1}^{|J|} M_{j_t} matrix holding the entries of A such that A_{d_1...d_N} is placed in row index 1 + sum_{t=1}^{|I|} (d_{i_t}-1) prod_{t'=t+1}^{|I|} M_{i_{t'}} and column index 1 + sum_{t=1}^{|J|} (d_{j_t}-1) prod_{t'=t+1}^{|J|} M_{j_{t'}}. If I = ∅ or J = ∅, then by definition [A]_{I,J} is a row or column (respectively) vector of dimension prod_{t=1}^{N} M_t holding A_{d_1...d_N} in entry 1 + sum_{t=1}^{N} (d_t-1) prod_{t'=t+1}^{N} M_{t'}.

A well known matrix operator is the Kronecker product, which we denote by ⊙. For two matrices A in R^{M_1 x M_2} and B in R^{N_1 x N_2}, A ⊙ B is the matrix in R^{M_1 N_1 x M_2 N_2} holding A_{ij} B_{kl} in row index (i-1)N_1 + k and column index (j-1)N_2 + l. Let A and B be tensors of orders P and Q respectively, and let (I,J) be a partition of [P+Q]. The basic relation that binds together the tensor product, the matricization operator, and the Kronecker product, is:

[A ⊗ B]_{I,J} = [A]_{I∩[P], J∩[P]} ⊙ [B]_{(I-P)∩[Q], (J-P)∩[Q]}    (1)

where I-P and J-P are simply the sets obtained by subtracting P from each of the elements in I and J respectively. In words, eq. 1 implies that the matricization of the tensor product between A and B w.r.t. the partition (I,J) of [P+Q], is equal to the Kronecker product between two matricizations: that of A w.r.t. the partition of [P] induced by the lower values of (I,J), and that of B w.r.t. the partition of [Q] induced by the higher values of (I,J).
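Since the analyses to follow lean heavily on eq. 1, a short numerical check may help ground the definitions. The NumPy encoding of matricization below (modes of I first, row-major flattening) is our own, chosen to match the index formulas above:

```python
import numpy as np

def matricize(T, I, J):
    """[T]_{I,J}: permute the modes listed in I (ascending) to the front,
    then flatten row-major; I and J are 0-based mode lists here."""
    rows = int(np.prod([T.shape[i] for i in I])) if I else 1
    return np.transpose(T, I + J).reshape(rows, -1)

A = np.random.randn(2, 3, 4)      # order P = 3
B = np.random.randn(3, 2)         # order Q = 2
AB = np.multiply.outer(A, B)      # tensor product, order P + Q = 5

I, J = [0, 2, 3], [1, 4]          # a partition of the five modes
lhs = matricize(AB, I, J)
rhs = np.kron(matricize(A, [0, 2], [1]),   # lower-value modes go to A
              matricize(B, [0], [1]))      # higher values, shifted by P, go to B
assert np.allclose(lhs, rhs)      # eq. 1 holds numerically
```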
"}, {"section_index": "4", "section_name": "3 CONVOLUTIONAL ARITHMETIC CIRCUITS", "section_text": "The convolutional arithmetic circuit architecture on which we focus in this paper is the one considered in Cohen et al. (2016b), portrayed in fig. 1(a). Instances processed by a network are represented as N-tuples of s-dimensional vectors. They are generally thought of as images, with the s-dimensional vectors corresponding to local patches. For example, instances could be 32-by-32 RGB images, with local patches being 5 x 5 regions crossing the three color bands. In this case, assuming a patch is taken around every pixel in an image (boundaries padded), we have N = 1024 and s = 75. Throughout the paper, we denote a general instance by X = (x_1,...,x_N), with x_1,...,x_N in R^s standing for its patches.

Figure 1: Best viewed in color. (a) Convolutional arithmetic circuit architecture analyzed in this paper (see description in sec. 3). (b) Shallow network with global pooling in its single hidden layer. (c) Illustration of input patch ordering for deep network with 2 x 2 pooling windows, along with patterns induced by the partitions (I^odd, J^even) and (I^low, J^high) (eq. 8 and 9 respectively).

The first layer in a network is referred to as representation. It consists of applying M representation functions f_{θ_1},...,f_{θ_M} : R^s -> R to all patches, thereby creating M feature maps. In the case where representation functions are chosen as f_{θ_d}(x) = σ(w_d^T x + b_d), with parameters θ_d = (w_d, b_d) in R^s x R and some point-wise activation σ(·), the representation layer reduces to a standard convolutional layer. More elaborate settings are also possible, for example modeling the representation as a cascade of convolutional layers with pooling in-between. Following the representation, a network includes L hidden layers indexed by l = 0...L-1. Each hidden layer l begins with a 1 x 1 conv operator, which is simply a three-dimensional convolution with r_l channels and filters of spatial dimensions 1-by-1.(2) This is followed by spatial pooling, that decimates feature maps by taking products of non-overlapping two-dimensional windows that cover the spatial extent. The last of the L hidden layers (l = L-1) reduces feature maps to singletons (its pooling operator is global), creating a vector of dimension r_{L-1}. This vector is mapped into Y network outputs through a final dense linear layer.

Altogether, the architectural parameters of a network are the type of representation functions (f_{θ_d}), the pooling window shapes and sizes (which in turn determine the number of hidden layers L), and the number of channels in each layer (M for representation, r_0...r_{L-1} for hidden layers, Y for output). Given these architectural parameters, the learnable parameters of a network are the representation weights (θ_d for channel d), the conv weights (a^{l,γ} for channel γ of hidden layer l), and the output weights (a^{L,y} for output node y).

For a particular setting of weights, every node (neuron) in a given network realizes a function from (R^s)^N to R. The receptive field of a node refers to the indexes of input patches on which its function may depend.

(2) Cohen et al. (2016b) consider two settings for the 1 x 1 conv operator. The first, referred to as weight sharing, is the one described above, and corresponds to standard convolution. The second is more general, allowing filters that slide across the previous layer to have different weights at different spatial locations. It is shown in Cohen et al. (2016b) that without weight sharing, a convolutional arithmetic circuit with one hidden layer (or more) is universal, i.e. can realize any function if its size (width) is unbounded. This property is imperative for the study of depth efficiency, as that requires shallow networks to ultimately be able to replicate any function realized by a deep network. In this paper we limit the presentation to networks with weight sharing, which are not universal. We do so because they are more conventional, and since our entire analysis is oblivious to whether or not weights are shared (applies to both settings). The only exception is where we reproduce the depth efficiency result of Cohen et al. (2016b). There, we momentarily consider networks without weight sharing.
For example, the receptive field of node j in channel γ of conv operator at hidden layer 0 is {j}, and that of an output node is [N], corresponding to the entire input. Denote by h_{(l,γ,j)} the function realized by node j of channel γ in conv operator at hidden layer l, and let I^{(l,γ,j)} ⊆ [N] be its receptive field. By the structure of the network it is evident that I^{(l,γ,j)} does not depend on γ, so we may write I^{(l,j)} instead. Moreover, assuming pooling windows are uniform across channels (as customary with convolutional networks), and taking into account the fact that they do not overlap, we conclude that I^{(l,j_1)} and I^{(l,j_2)} are necessarily disjoint if j_1 ≠ j_2. A simple induction over l = 0...L-1 then shows that h_{(l,γ,j)} may be expressed as:

h_{(l,γ,j)}(x_{i_1},...,x_{i_T}) = sum_{d_1...d_T=1}^{M} A^{(l,γ,j)}_{d_1...d_T} prod_{t=1}^{T} f_{θ_{d_t}}(x_{i_t})

where {i_1,...,i_T} stands for the receptive field I^{(l,j)}, and A^{(l,γ,j)} is a tensor of order T = |I^{(l,j)}| and dimension M in each mode, with entries given by polynomials in the network's conv weights {a^{l,γ}}_{l,γ}. Taking the induction one step further (from last hidden layer to network output), we obtain the following expression for functions realized by network outputs:

h_y(x_1,...,x_N) = sum_{d_1...d_N=1}^{M} A^y_{d_1,...,d_N} prod_{i=1}^{N} f_{θ_{d_i}}(x_i)    (2)

y in [Y] here is an output node index, and h_y is the function realized by that node. A^y is a tensor of order N and dimension M in each mode, with entries given by polynomials in the network's conv weights {a^{l,γ}}_{l,γ} and output weights a^{L,y}. Hereafter, terms such as function realized by a network or coefficient tensor realized by a network, are to be understood as referring to h_y or A^y respectively.
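For readers who prefer code, here is a minimal numerical rendering of eq. 2 for N = 2 patches. The Gaussian representation functions are a hypothetical choice for illustration, not one mandated by the text:

```python
import numpy as np

M, N, s = 2, 2, 3
thetas = [np.random.randn(s) for _ in range(M)]
f = lambda d, x: np.exp(-np.sum((x - thetas[d]) ** 2))  # assumed f_{theta_d}

A = np.random.randn(M, M)                  # coefficient tensor A^y, order N = 2
X = [np.random.randn(s) for _ in range(N)]
F = np.array([[f(d, X[i]) for d in range(M)] for i in range(N)])  # F[i,d] = f_{theta_d}(x_i)
h_y = np.einsum('de,d,e->', A, F[0], F[1])  # eq. 2: sum_{d1,d2} A_{d1 d2} f_{d1}(x_1) f_{d2}(x_2)
print(h_y)
```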
\u2014 1 then shows that hi,,;) may be expressed\n\nas hi.y,j)(Xiys +++ Xin) = on ay 1 Ard) Tl, fox, (Xi,), Where {t1,..., ir} stands for the\n\nreceptive field I\u2018), and A(473) is a tensor of order T\u2019 = |I\u2018)| and dimension M in each mode,\nwith entries given by polynomials in the network\u2019s conv weights {a!7},,. Taking the induction\none step further (from last hidden layer to network output), we obtain the following expression for\nfunctions realized by network outputs:\nDeep network. Consider a network as in fig. [Tfa), with pooling windows set to cover four entries\neach, resulting in L = log, N hidden layers. The linear weights of such a network are {a\u00b07 \u20ac\nR\u2122 } fro) for conv operator in hidden layer 0, {a7 \u20ac R\u2122-\"},\u00a2f,,) for conv operator in hidden\nlayer] = 1...L \u2014 1, and {a\u2019\u00a5 \u20ac R\"\u2014'} ,e[y] for dense output operator. They determine the\ncoefficient tensor AY (eq.[2) through the following recursive decomposition:\na7 and aX here are scalars representing entry a in the vectors a!-? and a\u201d respectively, and the\nsmbol \u00ae with a superscript stands for a repeated tensor product, e.g. @4a%* := a\u00ae* Qa\" @a?@\na\u00b0*. To verify that under pooling windows of size four AY is indeed given by eq.|3] simply plug\nthe rows of the decomposition into eq. [2] starting from bottom and continuing upwards. For context,\neq.[3]describes what is known as a hierarchical tensor decomposition (see chapter 11 in[Hlackbusct]\n(2012)), with underlying tree over modes being a full quad-tree (corresponding to the fact that\nnetwork\u2019s pooling windows cover four entries each).\narbitrary coefficient tensors A\u2019, functions hy as in eq.[2|form a universal hypotheses space. It is then\nshown that convolutional arithmetic circuits as in fig.{I{a) realize such functions by applying tensor\ndecompositions to A\u2019, with the type of decomposition determined by the structure of a network\n(number of layers, number of channels in each layer etc.). The deep network (fig.|1{a) with size-4\npooling windows and L = log, N hidden layers) and the shallow network (fig. fb) presented here-\ninabove are two special cases, whose corresponding tensor decompositions are given in eq. B]and [4]\nrespectively. The central result in|Cohen et al.|(2016b) relates to inductive bias through the notion of\ndepth efficiency \u2014 it is shown that in the parameter space of a deep network, all weight settings but a\nset of (Lebesgue) measure zero give rise to functions that can only be realized (or approximated) by\na shallow network if the latter has exponential size. This result does not relate to the characteristics\nof instances X = (x1,...,Xy), it only treats the ability of shallow networks to replicate functions\nrealized by deep networks.\nIn this paper we draw a line connecting the inductive bias to the nature of X, by studying the\nrelation between a network\u2019s architecture and its ability to model correlation among patches x;.\nSpecifically, in sec./4]we consider partitions (I, J) of [N] (IUJ = [N], where U stands for disjoint\nunion), and present the notion of separation rank as a measure of the correlation modeled between\nthe patches indexed by J and those indexed by J. In sec. the separation rank of a network\u2019s\nunction hy, w.r.t. a partition (J, J) is proven to be equal to the rank of [A] ;,; \u2014 the matricization\nof the coefficient tensor AY w.r.t. (I,J). 
Shallow network. The second network we pay special attention to is shallow, comprising a single hidden layer with global pooling — see illustration in fig. 1(b). The linear weights of such a network are {a^{0,γ} in R^M}_{γ in [r_0]} for the hidden conv operator and {a^{1,y} in R^{r_0}}_{y in [Y]} for the dense output operator. They determine the coefficient tensor A^y (eq. 2) as follows:

A^y = sum_{γ=1}^{r_0} a^{1,y}_γ · ⊗^N a^{0,γ}    (4)

where a^{1,y}_γ stands for entry γ of a^{1,y}, and again, the symbol ⊗ with a superscript represents a repeated tensor product. The tensor decomposition in eq. 4 is an instance of the classic CP decomposition, also known as rank-1 decomposition (see Kolda and Bader (2009) for a historic survey).

To conclude this section, we relate the background material above, as well as our contribution described in the upcoming sections, to the work of Cohen et al. (2016b). The latter shows that with arbitrary coefficient tensors A^y, functions h_y as in eq. 2 form a universal hypotheses space. It is then shown that convolutional arithmetic circuits as in fig. 1(a) realize such functions by applying tensor decompositions to A^y, with the type of decomposition determined by the structure of a network (number of layers, number of channels in each layer etc.). The deep network (fig. 1(a) with size-4 pooling windows and L = log_4 N hidden layers) and the shallow network (fig. 1(b)) presented hereinabove are two special cases, whose corresponding tensor decompositions are given in eq. 3 and 4 respectively. The central result in Cohen et al. (2016b) relates to inductive bias through the notion of depth efficiency — it is shown that in the parameter space of a deep network, all weight settings but a set of (Lebesgue) measure zero give rise to functions that can only be realized (or approximated) by a shallow network if the latter has exponential size. This result does not relate to the characteristics of instances X = (x_1,...,x_N); it only treats the ability of shallow networks to replicate functions realized by deep networks.

In this paper we draw a line connecting the inductive bias to the nature of X, by studying the relation between a network's architecture and its ability to model correlation among patches x_i. Specifically, in sec. 4 we consider partitions (I,J) of [N] (I ∪ J = [N], where ∪ here denotes a disjoint union), and present the notion of separation rank as a measure of the correlation modeled between the patches indexed by I and those indexed by J. In sec. 5.1 the separation rank of a network's function h_y w.r.t. a partition (I,J) is proven to be equal to the rank of [A^y]_{I,J} — the matricization of the coefficient tensor A^y w.r.t. (I,J). Sec. 5.2 derives lower and upper bounds on this rank for a deep network, showing that it supports exponential separation ranks with polynomial size for certain partitions, whereas for others it is required to be exponentially large. Subsequently, sec. 5.3 establishes an upper bound on rank[A^y]_{I,J} for shallow networks, implying that these must be exponentially large in order to model exponential separation rank under any partition, and thus cannot efficiently replicate a deep network's correlations. Our analysis concludes in sec. 6, where we discuss the pooling geometry of a deep network as a means for controlling the inductive bias by determining a correspondence between partitions (I,J) and spatial partitions of the input. Finally, we demonstrate experimentally in sec. 7 how different pooling geometries lead to superior performance in different tasks. Our experiments include not only convolutional arithmetic circuits, but also convolutional rectifier networks, i.e. convolutional networks with ReLU activation and max or average pooling."}, {"section_index": "5", "section_name": "4 SEPARATION RANK", "section_text": "In this section we define the concept of separation rank for functions realized by convolutional arithmetic circuits (sec. 3), i.e. real functions that take as input X = (x_1,...,x_N) in (R^s)^N. The separation rank serves as a measure of the correlations such functions induce between different sets of input patches, i.e. different subsets of the variable set {x_1,...,x_N}.
Specifically, if sep(h; I,J) = 1 then there exist g : (R*)!1 > R anc\ng! : (IRS)'7l \u2014 R such that A(x),...,%N) = gins.) Xiy,,)9' (Kj s+ ++Xjy)), and the func\ntion h cannot take into account consistency between the values of {x;,,...,X%),,} and those o\n{Xjpye ees Xj) }. Ina statistical setting, if h is a probability density function, this would mean tha\n{Xi,,-+-, Xi, } and {x;,,...,X,),, } are statistically independent. The higher sep(h; I, J) is, the\nfarther h is from this situation, i.e. the more it models dependency between {x;,,...,X\u00e9),, } anc\n\n{Xj,,-+-,X\u00a5),, }, or equivalently, the stronger the correlation it induces between the patches indexec\nby J and those indexed by J.\nThe interpretation of separation rank as a measure of deviation from separability is formalized in\napp. [BI where it is shown that sep(h; I, J) is closely related to the L? distance of h from the set of\nseparable functions w.r.t. (I, J). Specifically, we define D(h; I, J) as the latter distance divided by\nthe L? norm of hf]. and show that sep(h; I, J) provides an upper bound on D(h; J, J). While it is\nnot possible to lay out a general lower bound on D(h; I, J) in terms of sep(h; I, J), we show that the\nspecific lower bounds on sep(h; I, J) underlying our analyses can be translated into lower bounds\non D(h; I,J). This implies that our results, facilitated by upper and lower bounds on separation\nranks of convolutional arithmetic circuits, may equivalently be framed in terms of L? distances from\nseparable functions."}, {"section_index": "6", "section_name": "5 CORRELATION ANALYSIS", "section_text": "In this section we analyze convolutional arithmetic circuits (sec.|3) in terms of the correlations they\ncan model between sides of different input partitions, i.e. in terms of the separation ranks (sec.[4) they\nsupport under different partitions (I, J) of [N]. We begin in sec. 5] establishing a correspondence\nbetween separation ranks and coefficient tensor matricization ranks. This correspondence is ther\nused in sec. \u2018o analyze the deep and shallow networks (respectively) presented in seo.B\nWe note that we focus on these particular networks merely for simplicity of presentation \u2014\nanalysis can easily be adapted to account for alternative networks with different depths and pooling\nschemes.\n\nbh"}, {"section_index": "7", "section_name": "5.1 FROM SEPARATION RANK TO MATRICIZATION RANK", "section_text": "Let hy be a function realized by a convolutional arithmetic circuit, with corresponding coefficient\ntensor AY (eq. 2h. Denote by (J, J) an arbitrary partition of [N], i.e. IUJ = [N]. We are inter-\nested in studying sep(hy; I, J) \u2014 the separation rank of hy w.r.t. (I, J) (eq. [5p. As claim[I]below\nstates, assuming representation functions { fg, }ae[az| are linearly independent (if they are not, we\ndrop dependent functions and modify AY accordingly []), this separation rank is equal to the rank\nof [A\"]7,7 \u2014 the matricization of the coefficient tensor AY w.r.t. the partition (I, J). Our problem\nthus translates to studying ranks of matricized coefficient tensors.\nClaim 1. Let hy be a function realized by a convolutional arithmetic circuit (fig.|1{a)), with corre-\nsponding coefficient tensor AY (eq. Assume that the network\u2019s representation functions fo, are\nlinearly independent, and that they, as well as the functions g,,g/, in the definition of separation\nAs the linear weights of a network vary, so do the coefficient tensors (A\") it gives rise to. 
Ac:\ncordingly, for a particular partition (I,J), a network does not correspond to a single value o:\nrank|.A\u00a5];,7, but rather supports a range of values. We analyze this range by quantifying its maxi\nmum, which reflects the strongest correlation that the network can model between the input patches\nindexed by I and those indexed by J. One may wonder if the maximal value of rank[.A\u00a5]];,, is the\nappropriate statistic to measure, as a-priori, it may be that rank[A\u00a5]],; is maximal for very few o\nthe network\u2019s weight settings, and much lower for all the rest. Apparently, as claim [2|below states\nthis is not the case, and in fact rank[.A\u00a5]7,; is maximal under almost all of the network\u2019s weigh\nsettings.\nClaim 2. Consider a convolutional arithmetic circuit (fig.|J{a)) with corresponding coefficient ten-\nsor AY (eq. AY depends on the network\u2019s linear weights \u2014 {al}, , and a\u2019, thus for a given\npartition (I, J) of [N], rankl]A\u00a5] r,z is a function of these weights. This function obtains its maxi-\nmum almost everywhere (w.r.t. Lebesgue measure)."}, {"section_index": "8", "section_name": "5.2 DEEP NETWORK", "section_text": "In this subsection we study correlations modeled by the deep network presented in sec. (fig. [Tfa)\nwith size-4 pooling windows and L = log, N hidden layers). In accordance with sec. we do so\nby characterizing the maximal ranks of coefficient tensor matricizations under different partitions.\nRecall from eq. ]the hierarchical decomposition expressing a coefficient tensor AY realized by the\ndeep network. We are interested in matricizations of this tensor under different partitions of [NV].\nLet (J, J) be an arbitrary partition, i.e. [UJ = [N]. Matricizing the last level of eq. B]w.r-t. (I, J),\nwhile applying the relation in eq.|1} gives:\n[A] 2.0 _ ae aby . [oe-he @ grobe @ gro he @ ods\n\na=1\n\nTL-1 oy. L-1, L-1,\n\u00bb~ ag [\u00a2 \u201c@\u00a2 \u201c] IN[2-42-1], Jn [2-4-2]\n\na=1\n\nL-1, L-1,\nole ag] (1-2-44-1)q[2-42-1],(J\u20142-42-1)r[2-4\u00a3-1\n4-1. we obtain:\n\nApplying eq.|1| again, this time to matricizations of the tensor \u00a2\u00a2~):* @ @\n[A\u00a5lns=>5 ake: i} Cara ne ee\na=1\n\u00b0 Square-integrability of representation functions fo, may seem as a limitation at first glance, as for example\nneurons fo,(x) = o(wix + ba), with parameters 0g = (wa, ba) \u20ac R* x R and sigmoid or ReLU activation\no(-), do not meet this condition. However, since in practice our inputs are bounded (e.g. they represent image\npixels by holding intensity values), we may view functions as having compact support, which, as long as they\nare continuous (holds in all cases of interest), ensures square-integrability.\n. || In[42-1], Jn[44-1]\n\n\u00a9 [or] (148-1) [4-1], (J 42-1) [4-2]\n\n\u00a9 [obs\n\ni) [or] (12-421) [42-1], (J\u20142-4E-1)n [4-1]\n\n(1-3-4\u00a3-1) [42-2], (J\u20143-42-1) [44-3]\n3,...,.N\u2014-1} sever = {2.4,..., N}\nThe upper bound in theorem |1| is expressed via constants c!-\", defined recursively over levels\n1 = 0...L \u2014 1, with k ranging over 1...N/4! for each level 1. What prevents c* from grow-\ning double-exponentially fast (w.r.t. 1) is the minimization with M\u2122\u2122{l/.\u00ab1:l/.1}_ Specifically, if\nmin{|Zi,4| , |Ji,4|} is small, ie. if the partition induced by (J, J) on the k\u2019th size-4! group of patches\nis unbalanced (most of the patches belong to one side of the partition, and only a few belong to the\nother), c!* will be of reasonable size. 
Eq. 7 expresses [A^y]_{I,J} — the matricization w.r.t. the partition (I,J) of a coefficient tensor A^y realized by the deep network — in terms of the network's conv weights {a^{l,γ}}_{l,γ} and output weights a^{L,y}. As discussed above, our interest lies in the maximal rank that this matricization can take. Theorem 1 below provides lower and upper bounds on this maximal rank, by making use of eq. 7, and of the rank-multiplicative property of the Kronecker product (rank(A ⊙ B) = rank(A) · rank(B)).

Theorem 1. Let (I,J) be a partition of [N], and [A^y]_{I,J} be the matricization w.r.t. (I,J) of a coefficient tensor A^y (eq. 2) realized by the deep network (fig. 1(a) with size-4 pooling windows). For every l in {0...L-1} and k in [N/4^l], define I_{l,k} and J_{l,k} as in eq. 6. Then, the maximal rank that [A^y]_{I,J} can take (when network weights vary) is:

• No smaller than min{r_0, M}^S, where S := |{k in [N/4] : I_{1,k} ≠ ∅ and J_{1,k} ≠ ∅}|.
• No greater than min{M^{min{|I|,|J|}}, r_{L-1} · prod_{k=1}^{4} c^{L-1,k}}, where c^{0,k} := 1 for k in [N], and c^{l,k} := min{M^{min{|I_{l,k}|,|J_{l,k}|}}, r_{l-1} · prod_{k'=1}^{4} c^{l-1,4(k-1)+k'}} for l in [L-1], k in [N/4^l].

The lower bound in theorem 1 is exponential in S, the latter defined to be the number of size-4 patch groups that are split by the partition (I,J), i.e. whose indexes are divided between I and J. Partitions that split many of the size-4 patch groups will thus lead to a large lower bound. For example, consider the partition (I^odd, J^even) defined as follows:

I^odd = {1, 3, ..., N-1} ,  J^even = {2, 4, ..., N}    (8)

Under (I^odd, J^even) every size-4 patch group is split, i.e. S = N/4, and the lower bound becomes min{r_0, M}^{N/4} — exponential in the number of patches.

The upper bound in theorem 1 is expressed via constants c^{l,k}, defined recursively over levels l = 0...L-1, with k ranging over 1...N/4^l for each level l. What prevents c^{l,k} from growing double-exponentially fast (w.r.t. l) is the minimization with M^{min{|I_{l,k}|,|J_{l,k}|}}. Specifically, if min{|I_{l,k}|, |J_{l,k}|} is small, i.e. if the partition induced by (I,J) on the k'th size-4^l group of patches is unbalanced (most of the patches belong to one side of the partition, and only a few belong to the other), c^{l,k} will be of reasonable size. The higher this takes place in the hierarchy (i.e. the larger l is), the lower our eventual upper bound will be. In other words, if partitions induced by (I,J) on size-4^l patch groups are unbalanced for large values of l, the upper bound in theorem 1 will be small. For example, consider the partition (I^low, J^high) defined by:

I^low = {1, ..., N/2} ,  J^high = {N/2+1, ..., N}    (9)

Under (I^low, J^high), all partitions induced on size-4^{L-1} patch groups (quadrants of [N]) are completely one-sided (min{|I_{L-1,k}|, |J_{L-1,k}|} = 0 for all k in [4]), resulting in the upper bound being no greater than r_{L-1} — linear in network size.
To summarize this discussion, theorem 1 states that with the deep network, the maximal rank of a coefficient tensor matricization w.r.t. (I,J) highly depends on the nature of the partition (I,J) — it will be exponentially high for partitions such as (I^odd, J^even), that split many size-4 patch groups, while being only polynomial (or linear) for partitions like (I^low, J^high), under which size-4^l patch groups are unevenly divided for large values of l. Since the rank of a coefficient tensor matricization w.r.t. (I,J) corresponds to the strength of correlation modeled between input patches indexed by I and those indexed by J (sec. 5.1), we conclude that the ability of a polynomially sized deep network to model correlation between sets of input patches highly depends on the nature of these sets."}, {"section_index": "9", "section_name": "5.3 SHALLOW NETWORK", "section_text": "We now turn to study correlations modeled by the shallow network presented in sec. 3 (fig. 1(b)). In line with sec. 5.1, this is achieved by characterizing the maximal ranks of coefficient tensor matricizations under different partitions.

Recall from eq. 4 the CP decomposition expressing a coefficient tensor A^y realized by the shallow network. For an arbitrary partition (I,J) of [N], i.e. I ∪ J = [N], matricizing this decomposition with repeated application of the relation in eq. 1 gives the following expression for [A^y]_{I,J} — the matricization w.r.t. (I,J) of a coefficient tensor realized by the shallow network:

[A^y]_{I,J} = sum_{γ=1}^{r_0} a^{1,y}_γ · (⊙^{|I|} a^{0,γ}) (⊙^{|J|} a^{0,γ})^T    (10)

⊙^{|I|} a^{0,γ} and ⊙^{|J|} a^{0,γ} here are column vectors of dimensions M^{|I|} and M^{|J|} respectively, standing for the Kronecker products of a^{0,γ} in R^M with itself |I| and |J| times (respectively). Eq. 10 immediately leads to two observations regarding the ranks that may be taken by [A^y]_{I,J}. First, they depend on the partition (I,J) only through its division size, i.e. through |I| and |J|. Second, they are no greater than min{M^{min{|I|,|J|}}, r_0}, meaning that the maximal rank is linear (or less) in network size. In light of sec. 5.1 and 5.2, these findings imply that in contrast to the deep network, which with polynomial size supports exponential separation ranks under favored partitions, the shallow network treats all partitions (of a given division size) equally, and can only give rise to an exponential separation rank if its size is exponential.

Suppose now that we would like to use the shallow network to replicate a function realized by a polynomially sized deep network. So long as the deep network's function admits an exponential separation rank under at least one of the favored partitions (e.g. (I^odd, J^even) — eq. 8), the shallow network would have to be exponentially large in order to replicate it, i.e. depth efficiency takes place.(7) Since all but a negligible set of the functions realizable by the deep network give rise to maximal separation ranks (sec. 5.1), we obtain the complete depth efficiency result of Cohen et al. (2016b). However, unlike Cohen et al. (2016b), which did not provide any explanation for the usefulness of functions brought forth by depth, we obtain an insight into their utility — they are able to efficiently model strong correlation under favored partitions of the input.

(7) Convolutional arithmetic circuits as we have defined them (sec. 3) are not universal. In particular, it may very well be that a function realized by a polynomially sized deep network cannot be replicated by the shallow network, no matter how large (wide) we allow it to be. In such scenarios depth efficiency does not provide insight into the complexity of functions brought forth by depth. To obtain a shallow network that is universal, thus an appropriate gauge for depth efficiency, we may remove the constraint of weight sharing, i.e. allow the filters in the hidden conv operator to hold different weights at different spatial locations (see Cohen et al. (2016b) for proof that this indeed leads to universality). All results we have established for the original shallow network remain valid when weight sharing is removed. In particular, the separation ranks of the network remain linear in its size, which implies that, as suggested, depth efficiency indeed holds.

The deep network presented in sec. 3, whose correlations we analyzed in sec. 5.2, was defined as having size-4 pooling windows, i.e. pooling windows covering four entries each.
We have yet to specify the shapes of these windows, or equivalently, the spatial (two-dimensional) locations of nodes grouped together in the process of pooling. In compliance with standard convolutional network design, we now assume that the network's (size-4) pooling windows are contiguous square blocks, i.e. have shape 2 x 2. Under this configuration, the network's functional description (eq. 2 with A^y given by eq. 3) induces a spatial ordering of input patches(8), which may be described by the following recursive process:

• Set the index of the top-left patch to 1.
• For l = 1,...,L = log_4 N: Replicate the already-assigned top-left 2^{l-1}-by-2^{l-1} block of indexes, and place copies on its right, bottom-right and bottom. Then, add a 4^{l-1} offset to all indexes in the right copy, a 2·4^{l-1} offset to all indexes in the bottom-right copy, and a 3·4^{l-1} offset to all indexes in the bottom copy.

(8) The network's functional description assumes a one-dimensional full quad-tree grouping of input patch indexes. That is to say, it assumes that in the first pooling operation (hidden layer 0), the nodes corresponding to patches x_1, x_2, x_3, x_4 are pooled into one group, those corresponding to x_5, x_6, x_7, x_8 are pooled into another, and so forth. Similar assumptions hold for the deeper layers. For example, in the second pooling operation (hidden layer 1), the node with receptive field {1,2,3,4}, i.e. the one corresponding to the quadruple of patches {x_1, x_2, x_3, x_4}, is assumed to be pooled together with the nodes whose receptive fields are {5,6,7,8}, {9,10,11,12} and {13,14,15,16}.

With this spatial ordering (illustrated in fig. 1(c)), partitions (I,J) of [N] convey a spatial pattern. For example, the partition (I^odd, J^even) (eq. 8) corresponds to the pattern illustrated on the left of fig. 1(c), whereas (I^low, J^high) (eq. 9) corresponds to the pattern illustrated on the right. Our analysis (sec. 5.2) shows that the deep network is able to model strong correlation under (I^odd, J^even), while being inefficient for modeling correlation under (I^low, J^high). More generally, partitions for which S, defined in theorem 1, is high, convey patterns that split many 2 x 2 patch blocks, i.e. are highly entangled. These partitions enjoy the possibility of strong correlation. On the other hand, partitions for which min{|I_{l,k}|, |J_{l,k}|} is small for large values of l (see eq. 6 for the definition of I_{l,k} and J_{l,k}) convey patterns that divide large 2^l x 2^l patch blocks unevenly, i.e. separate the input into distinct contiguous regions. These partitions, as we have seen, are limited to low correlations.

We conclude that with 2 x 2 pooling, the deep network is able to model strong correlation between input regions that are highly entangled, at the expense of being inefficient for modeling correlation between input regions that are far apart. Had we selected a different pooling regime, the preference of input partition patterns in terms of modeled correlation would change. For example, if pooling windows were set to group nodes with their spatial reflections (horizontal, vertical and horizontal-vertical), coarse patterns that divide the input symmetrically, such as the one illustrated on the right of fig. 1(c), would enjoy the possibility of strong correlation, whereas many entangled patterns would now suffer from limited low correlation. The choice of pooling shapes thus serves as a means for controlling the inductive bias in terms of correlations modeled between input regions. Square contiguous windows, as commonly employed in practice, lead to a preference that complies with our intuition regarding the statistics of natural images (nearby pixels more correlated than distant ones). Other pooling schemes lead to different preferences, and this allows tailoring a network to data that departs from the usual domain of natural imagery. We demonstrate this experimentally in the next section, where it is shown how different pooling geometries lead to superior performance in different tasks.
We demonstrate this experimentally in the next section, where it is shown how different pooling geometries lead to superior performance in different tasks."}, {"section_index": "10", "section_name": "7 EXPERIMENTS", "section_text": "The main conclusion from our analyses (secs. 5 and 6) is that the pooling geometry of a deep convolutional network controls its inductive bias by determining which correlations between input regions can be modeled efficiently. We have also seen that shallow networks cannot model correlations efficiently, regardless of the considered input regions. In this section we validate these assertions empirically, not only with convolutional arithmetic circuits (subject of our analyses), but also with convolutional rectifier networks — convolutional networks with ReLU activation and max or average pooling.

Figure 2: Sample of images from our synthetic classification benchmark. Each image displays a random blob with holes, whose morphological closure and left-right symmetry about its center are measured. Two classification tasks are defined — one for closedness and one for symmetry. In each task, the objective is to distinguish between blobs whose respective property (closedness/symmetry) is high, and ones for which it is low. The tasks differ in nature — closedness requires modeling correlations between neighboring pixels, whereas symmetry requires modeling correlations between pixels and their reflections.

Our experiments are based on a synthetic classification benchmark inspired by medical imaging tasks. Instances to be classified are 32-by-32 binary images, each displaying a random distorted oval shape (blob) with missing pixels in its interior (holes). For each image, two continuous scores in range [0, 1] are computed. The first, referred to as closedness, reflects how morphologically closed a blob is, and is defined to be the ratio between the number of pixels in the blob, and the number of pixels in its closure (see app. D for the exact definition of the latter). The second score, named symmetry, reflects the degree to which a blob is left-right symmetric about its center. It is measured by cropping the bounding box around a blob, applying a left-right flip to the latter, and computing the ratio between the number of pixels in the intersection of the blob and its reflection, and the number of pixels in the blob.
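As a concrete companion to the two score definitions, the following sketch (ours, not the benchmark's generation code) computes closedness and symmetry for a binary image, with scipy's binary_closing standing in for the closure procedure of app. D; function names and the choice of structuring element are our assumptions.

```python
# Closedness and symmetry scores of a binary image (illustrative sketch).
import numpy as np
from scipy.ndimage import binary_closing

def closedness(img):
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])   # 4-neighbor structure
    closure = binary_closing(img.astype(bool), structure=cross)
    return img.sum() / max(closure.sum(), 1)              # |blob| / |closure|

def symmetry(img):
    rows, cols = np.nonzero(img)                          # assumes a non-empty blob
    box = img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    flipped = box[:, ::-1]                                # left-right reflection
    return np.logical_and(box, flipped).sum() / max(box.sum(), 1)
```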
To generate labeled sets for classification (train and test), we render multiple images, sort them according to their closedness and symmetry, and for each of the two scores, assign the label "high" to the top 40% and the label "low" to the bottom 40% (the mid 20% are considered ill-defined). This creates two binary (two-class) classification tasks — one for closedness and one for symmetry (see fig. 2 for a sample of images participating in both tasks). Given that closedness is a property of a local nature, we expect its classification task to require a predictor to be able to model strong correlations between neighboring pixels. Symmetry on the other hand is a property that relates pixels to their reflections, thus we expect its classification task to demand that a predictor be able to model correlations across distances.

We evaluated the deep convolutional arithmetic circuit considered throughout the paper (fig. 1(a) with size-4 pooling windows) under two different pooling geometries. The first, referred to as square, comprises standard 2×2 pooling windows. The second, dubbed mirror, pools together nodes with their horizontal, vertical and horizontal-vertical reflections. In both cases, input patches ($\mathbf{x}_i$) were set as individual pixels, resulting in $N=1024$ patches and $L=\log_4 N=5$ hidden layers. $M=2$ representation functions ($f_{\theta_d}$) were fixed, the first realizing the identity on binary inputs ($f_{\theta_1}(b)=b$ for $b\in\{0,1\}$), and the second realizing negation ($f_{\theta_2}(b)=1-b$ for $b\in\{0,1\}$). Classification was realized through $Y=2$ network outputs, with prediction following the stronger activation. The number of channels across all hidden layers was uniform, and varied between 8 and 128. Fig. 3 shows the results of applying the deep network with both square and mirror pooling, to both closedness and symmetry tasks, where each of the latter has 20000 images for training and 4000 images for testing. As can be seen in the figure, square pooling significantly outperforms mirror pooling in closedness classification, whereas the opposite occurs in symmetry classification. This complies with our discussion in sec. 6, according to which square pooling supports modeling correlations between entangled (neighboring) regions of the input, whereas mirror pooling puts focus on correlations between input regions that are symmetric w.r.t. one another. We thus obtain a demonstration of how prior knowledge regarding a task at hand may be used to tailor the inductive bias of a deep convolutional network by designing an appropriate pooling geometry.
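For concreteness, the two pooling geometries can be written down as groupings of spatial locations. The sketch below is our own illustration (the grid-based formulation and names are assumptions, not the paper's code): it lists the size-4 groups pooled by each geometry at the first pooling level of a W×W input.

```python
# Size-4 pooling groups for the "square" and "mirror" geometries (illustrative sketch).
import numpy as np

def square_groups(W):
    # contiguous 2x2 blocks of neighboring locations
    return [[(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
            for r in range(0, W, 2) for c in range(0, W, 2)]

def mirror_groups(W):
    # each location grouped with its horizontal, vertical and h+v reflections
    return [[(r, c), (r, W - 1 - c), (W - 1 - r, c), (W - 1 - r, W - 1 - c)]
            for r in range(W // 2) for c in range(W // 2)]

# For W = 32 (the benchmark's image width) both geometries yield 256 size-4 groups,
# but square groups neighbors whereas mirror groups symmetric reflections.
print(len(square_groups(32)), len(mirror_groups(32)))
```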
Figure 3: Results of applying a deep convolutional arithmetic circuit to closedness and symmetry classification tasks. Two pooling geometries were evaluated — square, which supports modeling correlations between neighboring input regions, and mirror, which puts focus on correlations between regions that are symmetric w.r.t. one another. Each pooling geometry outperforms the other on the task for which its correlations are important, demonstrating how prior knowledge regarding a task at hand may be used to tailor the inductive bias through proper pooling design.

In addition to the deep network, we also evaluated the shallow convolutional arithmetic circuit analyzed in the paper (fig. 1(b)). The architectural choices for this network were the same as those described above for the deep network, besides the number of hidden channels, which in this case applied to the network's single hidden layer, and varied between 64 and 4096. The highest train and test accuracies delivered by this network (with 4096 hidden channels) were roughly 62% on the closedness task, and 77% on the symmetry task. The fact that these accuracies are inferior to those of the deep network, even when the latter's pooling geometry is not optimal for the task at hand, complies with our analysis in sec. 5. Namely, it complies with the observation that separation ranks (correlations) are sometimes exponential and sometimes polynomial with the deep network, whereas with the shallow one they are never more than linear in network size.

Finally, to assess the validity of our findings for convolutional networks in general, not just convolutional arithmetic circuits, we repeated the above experiments with convolutional rectifier networks. Namely, we placed ReLU activations after every conv operator, switched the pooling operation from product to average, and re-evaluated the deep (square and mirror pooling geometries) and shallow networks. We then reiterated this process once more, with pooling operation set to max instead of average. The results obtained by the deep networks are presented in fig. 4. The shallow network with average pooling reached train/test accuracies of roughly 58% on the closedness task, and 55% on the symmetry task. With max pooling, performance of the shallow network did not exceed chance. Altogether, convolutional rectifier networks exhibit the same phenomena observed with convolutional arithmetic circuits, indicating that the conclusions from our analyses likely apply to such networks as well. Formal adaptation of the analyses to convolutional rectifier networks, similarly to the adaptation of Cohen et al. (2016b) carried out in Cohen and Shashua (2016), is left for future work."}, {"section_index": "11", "section_name": "8 DISCUSSION", "section_text": "Through the notion of separation rank, we studied the relation between the architecture of a convolutional network, and its ability to model correlations among input regions. For a given input partition, the separation rank quantifies how far a function is from separability, which in a probabilistic setting, corresponds to statistical independence between sides of the partition.

Our analysis shows that a polynomially sized deep convolutional arithmetic circuit supports exponentially high separation ranks for certain input partitions, while being limited to polynomial or linear (in network size) separation ranks for others. The network's pooling window shapes effectively determine which input partitions are favored in terms of separation rank, i.e. which partitions enjoy the possibility of exponentially high separation ranks with polynomial network size, and which require the network to be exponentially large. Pooling geometry thus serves as a means for controlling the inductive bias.
The particular pooling scheme commonly employed in practice — square contiguous windows — favors interleaved partitions over ones that divide the input into distinct areas, thus orienting the inductive bias towards the statistics of natural images (nearby pixels more correlated than distant ones). Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery.

Figure 4: Results of applying deep convolutional rectifier networks (with average pooling and with max pooling) to closedness and symmetry classification tasks. The same trends observed with the deep convolutional arithmetic circuit (fig. 3) are apparent here.

As opposed to deep convolutional arithmetic circuits, shallow ones support only linear (in network size) separation ranks. Therefore, in order to replicate a function realized by a deep network (exponential separation rank), a shallow network must be exponentially large. By this we derive the depth efficiency result of Cohen et al. (2016b), but in addition, provide an insight into the benefit of functions brought forth by depth — they are able to efficiently model strong correlation under favored partitions of the input.

We validated our conclusions empirically, with convolutional arithmetic circuits as well as convolutional rectifier networks — convolutional networks with ReLU activation and max or average pooling. Our experiments demonstrate how different pooling geometries lead to superior performance in different tasks. Specifically, we evaluate deep networks in the measurement of shape continuity, a task of a local nature, and show that standard square pooling windows outperform ones that join together nodes with their spatial reflections. In contrast, when measuring shape symmetry, modeling correlations across distances is of vital importance, and the latter pooling geometry is superior to the conventional one. Shallow networks are inefficient at modeling correlations of any kind, and indeed lead to poor performance on both tasks.

Finally, our analyses and results bring forth the possibility of expanding the coverage of correlations efficiently modeled by a deep convolutional network. Specifically, by blending together multiple pooling geometries in the hidden layers of a network, it is possible to facilitate simultaneous support for a wide variety of correlations suiting data of different types. Investigation of this direction, from both theoretical and empirical perspectives, is viewed as a promising avenue for future research."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work is supported by Intel grant ICRI-CI #9-2012-6133, by ISF Center grant 1790/12 and by the European Research Council (TheoryDL project).
Nadav Cohen is supported by a Google Doctoral Fellowship in Machine Learning."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Richard Bellman. Introduction to matrix analysis, volume 960. SIAM, 1970.

Richard Caron and Tim Traynor. The zero set of a polynomial. WSMR Report 05-02, 2005.

Nadav Cohen and Amnon Shashua. Simnets: A generalization of convolutional networks. Advances in Neural Information Processing Systems (NIPS), Deep Learning Workshop, 2014.

Nadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decompositions. International Conference on Machine Learning (ICML), 2016.

Nadav Cohen, Or Sharir, and Amnon Shashua. Deep simnets. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016a.

Nadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis. Conference On Learning Theory (COLT), 2016b.

Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.

Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666-674, 2011.

Wolfgang Hackbusch. Tensor Spaces and Numerical Tensor Calculus, volume 42 of Springer Series in Computational Mathematics. Springer Science & Business Media, Berlin, Heidelberg, February 2012.

Robert M Haralick, Stanley R Sternberg, and Xinhua Zhuang. Image analysis using mathematical morphology. IEEE Transactions on Pattern Analysis and Machine Intelligence, (4):532-550, 1987.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675-678. ACM, 2014.

Frank Jones. Lebesgue integration on Euclidean space. Jones & Bartlett Learning, 2001.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pages 1106-1114, 2012.

Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10), 1995.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, May 2015.

Hrushikesh Mhaskar, Qianli Liao, and Tomaso Poggio. Learning real and boolean functions: When is deep better than shallow. arXiv preprint arXiv:1603.00988, 2016.

Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pages 2924-2932, 2014.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines.
In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807-814, 2010.

Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of inference regions of deep feed-forward networks with piece-wise linear activations. arXiv preprint arXiv:1312, 2013.

Tomaso Poggio, Fabio Anselmi, and Lorenzo Rosasco. I-theory on depth vs width: hierarchical function composition. Technical report, Center for Brains, Minds and Machines (CBMM), 2015.

Walter Rudin. Real and complex analysis. International Series in Pure and Applied Mathematics, 1991.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CVPR, 2015.

Matus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint arXiv:1509.08101, 2015."}, {"section_index": "14", "section_name": "A.1 PROOF OF CLAIM 1", "section_text": "We prove the equality in two steps, first showing that $sep(h_y;I,J)\le rank[\mathcal{A}^y]_{I,J}$, and then establishing the converse. The first step is elementary, and does not make use of the representation functions' ($f_{\theta_d}$) linear independence, or of measurability/square-integrability. The second step does rely on these assumptions, and employs slightly more advanced mathematical machinery. Throughout the proof, we assume without loss of generality that the partition $(I,J)$ of $[N]$ is such that $I$ takes on lower values, while $J$ takes on higher ones. That is to say, we assume that $I=\{1,\dots,|I|\}$ and $J=\{|I|+1,\dots,N\}$.⁹

9 To see that this does not limit generality, denote $I=\{i_1,\dots,i_{|I|}\}$ and $J=\{j_1,\dots,j_{|J|}\}$, and define an auxiliary function $h'_y$ by permuting the entries of $h_y$ such that those indexed by $I$ are on the left and those indexed by $J$ on the right, i.e. $h'_y(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}},\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}})=h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)$. Obviously $sep(h_y;I,J)=sep(h'_y;I',J')$, where the partition $(I',J')$ is defined by $I'=\{1,\dots,|I|\}$ and $J'=\{|I|+1,\dots,N\}$. Analogously to the definition of $h'_y$, let $\mathcal{A}'^y$ be the tensor obtained by permuting the modes of $\mathcal{A}^y$ such that those indexed by $I$ are on the left and those indexed by $J$ on the right, i.e. $\mathcal{A}'^y_{d_{i_1}\dots d_{i_{|I|}}d_{j_1}\dots d_{j_{|J|}}}=\mathcal{A}^y_{d_1\dots d_N}$. It is not difficult to see that matricizing $\mathcal{A}'^y$ w.r.t. $(I',J')$ is equivalent
to matricizing $\mathcal{A}^y$ w.r.t. $(I,J)$, i.e. $[\mathcal{A}'^y]_{I',J'}=[\mathcal{A}^y]_{I,J}$, and in particular $rank[\mathcal{A}'^y]_{I',J'}=rank[\mathcal{A}^y]_{I,J}$. Moreover, since by definition $\mathcal{A}^y$ is a coefficient tensor corresponding to $h_y$ (eq. 2), $\mathcal{A}'^y$ will be a coefficient tensor that corresponds to $h'_y$. Now, our proof will show that $sep(h'_y;I',J')=rank[\mathcal{A}'^y]_{I',J'}$, which, in light of the equalities above, implies $sep(h_y;I,J)=rank[\mathcal{A}^y]_{I,J}$, as required.

To prove that $sep(h_y;I,J)\le rank[\mathcal{A}^y]_{I,J}$, denote by $R$ the rank of $[\mathcal{A}^y]_{I,J}$. The latter is an $M^{|I|}$-by-$M^{|J|}$ matrix, thus there exist vectors $\mathbf{u}_1\dots\mathbf{u}_R\in\mathbb{R}^{M^{|I|}}$ and $\mathbf{v}_1\dots\mathbf{v}_R\in\mathbb{R}^{M^{|J|}}$ such that $[\mathcal{A}^y]_{I,J}=\sum_{v=1}^{R}\mathbf{u}_v\mathbf{v}_v^\top$. For every $v\in[R]$, let $\mathcal{B}^v$ be the tensor of order $|I|$ and dimension $M$ in each mode whose arrangement as a column vector gives $\mathbf{u}_v$, i.e. whose matricization w.r.t. the partition $([|I|],\emptyset)$ is equal to $\mathbf{u}_v$. Similarly, let $\mathcal{C}^v$, $v\in[R]$, be the tensor of order $|J|=N-|I|$ and dimension $M$ in each mode whose matricization w.r.t. the partition $(\emptyset,[|J|])$ (arrangement as a row vector) is equal to $\mathbf{v}_v^\top$. It holds that:

$$[\mathcal{A}^y]_{I,J}=\sum_{v=1}^{R}\mathbf{u}_v\mathbf{v}_v^\top=\sum_{v=1}^{R}[\mathcal{B}^v]_{[|I|],\emptyset}\odot[\mathcal{C}^v]_{\emptyset,[|J|]}=\sum_{v=1}^{R}[\mathcal{B}^v]_{I\cap[|I|],J\cap[|I|]}\odot[\mathcal{C}^v]_{(I-|I|)\cap[|J|],(J-|I|)\cap[|J|]}=\sum_{v=1}^{R}[\mathcal{B}^v\otimes\mathcal{C}^v]_{I,J}=\Big[\sum_{v=1}^{R}\mathcal{B}^v\otimes\mathcal{C}^v\Big]_{I,J}$$

where the third equality relies on the assumption $I=\{1,\dots,|I|\}$, $J=\{|I|+1,\dots,N\}$, the fourth equality makes use of the relation in eq. 1, and the last equality is based on the linearity of the matricization operator. Since matricizations are merely rearrangements of tensors, the fact that $[\mathcal{A}^y]_{I,J}=[\sum_v\mathcal{B}^v\otimes\mathcal{C}^v]_{I,J}$ implies $\mathcal{A}^y=\sum_{v=1}^{R}\mathcal{B}^v\otimes\mathcal{C}^v$, or equivalently, $\mathcal{A}^y_{d_1\dots d_N}=\sum_{v=1}^{R}\mathcal{B}^v_{d_1\dots d_{|I|}}\cdot\mathcal{C}^v_{d_{|I|+1}\dots d_N}$ for every $d_1\dots d_N\in[M]$. Plugging this into eq. 2 gives:

$$h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{d_1\dots d_N=1}^{M}\mathcal{A}^y_{d_1\dots d_N}\prod_{i=1}^{N}f_{\theta_{d_i}}(\mathbf{x}_i)=\sum_{v=1}^{R}\Big(\sum_{d_1\dots d_{|I|}=1}^{M}\mathcal{B}^v_{d_1\dots d_{|I|}}\prod_{t=1}^{|I|}f_{\theta_{d_t}}(\mathbf{x}_t)\Big)\Big(\sum_{d_{|I|+1}\dots d_N=1}^{M}\mathcal{C}^v_{d_{|I|+1}\dots d_N}\prod_{t=|I|+1}^{N}f_{\theta_{d_t}}(\mathbf{x}_t)\Big)$$

Defining, for every $v\in[R]$:

$$g_v(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})=\sum_{d_1\dots d_{|I|}=1}^{M}\mathcal{B}^v_{d_1\dots d_{|I|}}\prod_{t=1}^{|I|}f_{\theta_{d_t}}(\mathbf{x}_t)\qquad g'_v(\mathbf{x}_1,\dots,\mathbf{x}_{|J|})=\sum_{d_1\dots d_{|J|}=1}^{M}\mathcal{C}^v_{d_1\dots d_{|J|}}\prod_{t=1}^{|J|}f_{\theta_{d_t}}(\mathbf{x}_t)$$

and substituting these into the expression above, leads to $h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{v=1}^{R}g_v(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})\,g'_v(\mathbf{x}_{|I|+1},\dots,\mathbf{x}_N)$, and thus $sep(h_y;I,J)\le R=rank[\mathcal{A}^y]_{I,J}$.
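The direction just proven is easy to verify numerically. The following sketch (ours) builds a coefficient tensor from $R$ tensor-product terms and checks that the rank of its matricization never exceeds $R$; all dimensions are arbitrary illustrative choices.

```python
# Rank of a matricization of A = sum_v B^v (x) C^v is at most R (illustrative check).
import numpy as np

M, n_I, n_J, R = 3, 2, 2, 4  # mode dimension, |I|, |J|, number of terms
B = np.random.randn(R, *([M] * n_I))
C = np.random.randn(R, *([M] * n_J))
A = sum(np.tensordot(B[v], C[v], axes=0) for v in range(R))  # sum of tensor products
A_mat = A.reshape(M ** n_I, M ** n_J)  # matricization w.r.t. I = {1,2}, J = {3,4}
print(np.linalg.matrix_rank(A_mat), "<=", R)
```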
For proving the converse inequality, i.e. $sep(h_y;I,J)\ge rank[\mathcal{A}^y]_{I,J}$, we rely on basic concepts and results from functional analysis, or more specifically, from the topic of $L^2$ spaces. While a full introduction to this topic is beyond our scope (the interested reader is referred to Rudin (1991)), we briefly lay out here the minimal background required in order to follow our proof. For any $n\in\mathbb{N}$, $L^2(\mathbb{R}^n)$ is formally defined as the Hilbert space of Lebesgue measurable square-integrable real functions over $\mathbb{R}^n$,¹⁰ equipped with standard (point-wise) addition and scalar multiplication, as well as the inner product defined by integration over point-wise multiplication. For our purposes, $L^2(\mathbb{R}^n)$ may simply be thought of as the (infinite-dimensional) vector space of functions $g:\mathbb{R}^n\to\mathbb{R}$ satisfying $\int g^2<\infty$, with inner product defined by $\langle g_1,g_2\rangle:=\int g_1\cdot g_2$. Our proof will make use of the following basic facts related to $L^2$ spaces:

Fact 1. If $V$ is a finite-dimensional subspace of $L^2(\mathbb{R}^n)$, then any $g\in L^2(\mathbb{R}^n)$ may be expressed as $g=p+\delta$, with $p\in V$ and $\delta\in V^\perp$ (i.e. $\delta$ is orthogonal to all elements in $V$). Moreover, such a representation is unique, so in the case where $g\in V$, we necessarily have $p=g$ and $\delta=0$.

Fact 2. If $g\in L^2(\mathbb{R}^n)$, $g'\in L^2(\mathbb{R}^m)$, then the function $(\mathbf{x}_1,\mathbf{x}_2)\mapsto g(\mathbf{x}_1)\cdot g'(\mathbf{x}_2)$ belongs to $L^2(\mathbb{R}^n\times\mathbb{R}^m)$.

Fact 3. Let $V$ and $V'$ be finite-dimensional subspaces of $L^2(\mathbb{R}^n)$ and $L^2(\mathbb{R}^m)$ respectively, and define $U\subset L^2(\mathbb{R}^n\times\mathbb{R}^m)$ to be the subspace spanned by $\{(\mathbf{x}_1,\mathbf{x}_2)\mapsto p(\mathbf{x}_1)\cdot p'(\mathbf{x}_2):p\in V,p'\in V'\}$. Given $g\in L^2(\mathbb{R}^n)$, $g'\in L^2(\mathbb{R}^m)$, consider the function $(\mathbf{x}_1,\mathbf{x}_2)\mapsto g(\mathbf{x}_1)\cdot g'(\mathbf{x}_2)$ in $L^2(\mathbb{R}^n\times\mathbb{R}^m)$. This function belongs to $U^\perp$ if $g\in V^\perp$ or $g'\in V'^\perp$.

Fact 4. If $g_1\dots g_m\in L^2(\mathbb{R}^n)$ are linearly independent, then for any $k\in\mathbb{N}$, the set of functions $\{(\mathbf{x}_1,\dots,\mathbf{x}_k)\mapsto\prod_{i=1}^{k}g_{d_i}(\mathbf{x}_i)\}_{d_1\dots d_k\in[m]}$ is linearly independent in $L^2((\mathbb{R}^n)^k)$.

10 More precisely, elements of the space are equivalence classes of functions, where two functions are considered equivalent if the set in $\mathbb{R}^n$ on which they differ has measure zero.

To facilitate application of the theory of $L^2$ spaces, we now make use of the assumption that the network's representation functions $f_{\theta_d}$, as well as the functions $g_v$, $g'_v$ in the definition of separation rank (eq. 5), are measurable and square-integrable. Taking into account the expression given in eq. 2 for $h_y$, as well as fact 2, one readily sees that $f_{\theta_1}\dots f_{\theta_M}\in L^2(\mathbb{R}^s)$ implies $h_y\in L^2((\mathbb{R}^s)^N)$. The separation rank $sep(h_y;I,J)$ will be the minimal non-negative integer $R$ such that there exist $g_1\dots g_R\in L^2((\mathbb{R}^s)^{|I|})$ and $g'_1\dots g'_R\in L^2((\mathbb{R}^s)^{|J|})$ for which:

$$h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{v=1}^{R}g_v(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})\,g'_v(\mathbf{x}_{|I|+1},\dots,\mathbf{x}_N)$$

We would like to show that $sep(h_y;I,J)\ge rank[\mathcal{A}^y]_{I,J}$. Our strategy for achieving this will be to start from the latter equation and derive an expression for $[\mathcal{A}^y]_{I,J}$ comprising a sum of $R$ rank-1 matrices. As an initial step along this path, define the following finite-dimensional subspaces:

$$V:=span\Big\{(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})\mapsto\prod_{t=1}^{|I|}f_{\theta_{d_t}}(\mathbf{x}_t):d_1\dots d_{|I|}\in[M]\Big\}\subset L^2((\mathbb{R}^s)^{|I|})$$
$$V':=span\Big\{(\mathbf{x}_1,\dots,\mathbf{x}_{|J|})\mapsto\prod_{t=1}^{|J|}f_{\theta_{d_t}}(\mathbf{x}_t):d_1\dots d_{|J|}\in[M]\Big\}\subset L^2((\mathbb{R}^s)^{|J|})$$
$$U:=span\{(\mathbf{x}_1,\dots,\mathbf{x}_N)\mapsto p(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})\cdot p'(\mathbf{x}_{|I|+1},\dots,\mathbf{x}_N):p\in V,p'\in V'\}\subset L^2((\mathbb{R}^s)^N)$$

By fact 1, every $g_v$ may be written as $g_v=p_v+\delta_v$ with $p_v\in V$ and $\delta_v\in V^\perp$, and similarly $g'_v=p'_v+\delta'_v$ with $p'_v\in V'$ and $\delta'_v\in V'^\perp$. Accordingly:

$$h_y=\sum_{v=1}^{R}(p_v+\delta_v)(p'_v+\delta'_v)=\sum_{v=1}^{R}p_v\cdot p'_v+\sum_{v=1}^{R}p_v\cdot\delta'_v+\sum_{v=1}^{R}\delta_v\cdot p'_v+\sum_{v=1}^{R}\delta_v\cdot\delta'_v$$

Given that $U$ is the span of products from $V$ and $V'$, and that $p_v\in V$, $\delta_v\in V^\perp$, $p'_v\in V'$, $\delta'_v\in V'^\perp$, one readily sees that the first term in the latter expression belongs to $U$, while, according to fact 3, the second, third and fourth terms are orthogonal to $U$. We thus obtained an orthogonal decomposition of $h_y$ w.r.t. $U$. Since $h_y$ is contained in $U$, the orthogonal component must vanish (fact 1), and we arrive at:

$$h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{v=1}^{R}p_v(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})\cdot p'_v(\mathbf{x}_{|I|+1},\dots,\mathbf{x}_N)$$

For every $v\in[R]$, let $\mathcal{B}^v$ and $\mathcal{C}^v$ be coefficient tensors of $p_v$ and $p'_v$ w.r.t. the functions that span $V$ and $V'$, respectively. Put formally, $\mathcal{B}^v$ and $\mathcal{C}^v$ are tensors of orders $|I|$ and $|J|$ (respectively), with dimension $M$ in each mode, meeting:

$$p_v(\mathbf{x}_1,\dots,\mathbf{x}_{|I|})=\sum_{d_1\dots d_{|I|}=1}^{M}\mathcal{B}^v_{d_1\dots d_{|I|}}\prod_{t=1}^{|I|}f_{\theta_{d_t}}(\mathbf{x}_t)\qquad p'_v(\mathbf{x}_1,\dots,\mathbf{x}_{|J|})=\sum_{d_1\dots d_{|J|}=1}^{M}\mathcal{C}^v_{d_1\dots d_{|J|}}\prod_{t=1}^{|J|}f_{\theta_{d_t}}(\mathbf{x}_t)$$

Plugging these into the expression for $h_y$ above gives:

$$h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{d_1\dots d_N=1}^{M}\Big(\sum_{v=1}^{R}\mathcal{B}^v_{d_1\dots d_{|I|}}\mathcal{C}^v_{d_{|I|+1}\dots d_N}\Big)\prod_{i=1}^{N}f_{\theta_{d_i}}(\mathbf{x}_i)$$

Comparing this expression for $h_y$ to that given in eq. 2, and recalling that by fact 4 the products $\prod_i f_{\theta_{d_i}}$ are linearly independent, we obtain:

$$\mathcal{A}^y_{d_1\dots d_N}=\sum_{v=1}^{R}\mathcal{B}^v_{d_1\dots d_{|I|}}\mathcal{C}^v_{d_{|I|+1}\dots d_N}\ \ \forall d_1\dots d_N\in[M]\ \implies\ \mathcal{A}^y=\sum_{v=1}^{R}\mathcal{B}^v\otimes\mathcal{C}^v$$

Matricizing the tensor equation on the right w.r.t. $(I,J)$ gives:

$$[\mathcal{A}^y]_{I,J}=\Big[\sum_{v=1}^{R}\mathcal{B}^v\otimes\mathcal{C}^v\Big]_{I,J}=\sum_{v=1}^{R}[\mathcal{B}^v\otimes\mathcal{C}^v]_{I,J}=\sum_{v=1}^{R}[\mathcal{B}^v]_{I\cap[|I|],J\cap[|I|]}\odot[\mathcal{C}^v]_{(I-|I|)\cap[|J|],(J-|I|)\cap[|J|]}=\sum_{v=1}^{R}[\mathcal{B}^v]_{[|I|],\emptyset}\odot[\mathcal{C}^v]_{\emptyset,[|J|]}$$

where the second equality is based on the linearity of the matricization operator, the third equality relies on the relation in eq. 1, and the last equality makes use of the assumption $I=\{1,\dots,|I|\}$, $J=\{|I|+1,\dots,N\}$. For every $v\in[R]$, $[\mathcal{B}^v]_{[|I|],\emptyset}$ is a column vector of dimension $M^{|I|}$ and $[\mathcal{C}^v]_{\emptyset,[|J|]}$ is a row vector of dimension $M^{|J|}$. Denoting these by $\mathbf{u}_v$ and $\mathbf{v}_v^\top$ respectively, we may write $[\mathcal{A}^y]_{I,J}=\sum_{v=1}^{R}\mathbf{u}_v\mathbf{v}_v^\top$. This shows that $rank[\mathcal{A}^y]_{I,J}\le R$. Since $R$ is a general non-negative integer that admits the separable decomposition above, we may take it to be minimal, i.e. to be equal to $sep(h_y;I,J)$ — the separation rank of $h_y$ w.r.t. $(I,J)$. By this we obtain $rank[\mathcal{A}^y]_{I,J}\le sep(h_y;I,J)$, which is what we set out to prove.
PROOF OF CLAIM", "section_text": "The claim is framed in measure theoretical terms, and in accordance, so will its proof be. While a complet\nntroduction to measure theory is beyond our scope (the interested reader is referred to ), we brief\nsonvey here the intuition behind the concepts we will be using, as well as facts we rely upon. The Lebesgu\nneasure is defined over sets in a Euclidean space, and may be interpreted as quantifying their \u201cvolume\u201d. Fc\n-xample, the Lebesgue measure of a unit hypercube is one, of the entire space is infinity, and of a finite set c\nd0ints is zero. In this context, when a phenomenon is said to occur almost everywhere, it means that the s\u00ab\nof points in which it does not occur has Lebesgue measure zero, i.e. is negligible. An important result we wi\nnake use of (proven in{Caron and Traynor] for example) is the following. Given a polynomial define\nover n real variables, the set of points in R\u201d on which it vanishes is either the entire space (when the polynomit\nn question is the zero polynomial), or it must have Lebesgue measure zero. In other words, if a polynomial :\n10t identically zero, it must be different from zero almost everywhere.\nHeading on to the proof, we recall from sec. |3}that the entries of the coefficient tensor A\u201d (eq.|2) are given\nby polynomials in the network\u2019s conv weight alt and output weights a\u2019\"\u201d. Since [.A\u00a5]7,7 \u2014 the ma-\ntricization of A\u201d w.r. the partition (J, J), is merely a rearrangement of the tensor as a matrix, this matrix\ntoo has entries given by polynomials in the network\u2019s linear weights. Now, denote by r the maximal rank\ntaken by [.A\u201d] 7,7 as network weights vary, and consider a specific setting of weights for which this rank is\nattained. We may assume without loss of generality that under this setting, the top-left r-by-r block of [.A\u00a5] 7,7\nis non-singular. The corresponding minor, i.e. the determinant of the sub-matrix ([.A\u00a5Jz,7)i:r.1:r, is thus a\npolynomial defined over {al yu and a\u2019 which is not identically zero. In light of the above, this polynomial\nis different from zero almost everywhere, implying that rank([.A\"]7,7)1:r,1:r = 7 almost everywhere. Since\nrank[A\"]1,7>rank([A\"] 1,7)1:r,1:r, and since by definition r is the maximal rank that [.A\u00a5]];,7 can take, we\nhave that rank].A\"] ;.7 is maximal almost everywhere."}, {"section_index": "16", "section_name": "A.3 PROOF OF THEOREM[L", "section_text": "The matrix decomposition in eq. |7] expresses [.A] 7,7 in terms of the network\u2019s linear weights \u2014 fa? \u20ac\n\nRY }ye[ro] for conv operator in hidden layer 0, fal? \u20ac R\u2122-1}.\u00a21,) for conv operator in hidden layer\n\n1=1...L\u20141, anda\u201d \u20ac R\"!\u2014? for node y of dense output operator. We prove lower and upper bounds on the\nmaximal rank that [Al], can take as these weights vary. Our proof relies on the rank-multiplicative property\nof the Kronecker product (rank(A@B) = rank(A)-rank(B) for any real matrices A and B \u2014 se\n0) for proof), but is otherwise elementary.\n>y<min{ro, Mf\n, otherwise\n\n7S! tort =2...b\u20141\n\n, otherwise\nLet n \u20ac [N/4]. Recalling the definition of [i,x and Ji,\u00ab from eq.[6] consider the sets Ii,n and Ji,n, as well\nas Ip.4(n\u20141) 44 and Jo,a(n\u20141) +4 for t \u20ac [4]. (Lin, Jin) is a partition of [4], ie. 
"}, {"section_index": "16", "section_name": "A.3 PROOF OF THEOREM 1", "section_text": "The matrix decomposition in eq. 7 expresses $[\mathcal{A}^y]_{I,J}$ in terms of the network's linear weights — $\{\mathbf{a}^{0,\gamma}\in\mathbb{R}^M\}_{\gamma\in[r_0]}$ for the conv operator in hidden layer 0, $\{\mathbf{a}^{l,\gamma}\in\mathbb{R}^{r_{l-1}}\}_{\gamma\in[r_l]}$ for the conv operator in hidden layer $l=1\dots L-1$, and $\mathbf{a}^{L,y}\in\mathbb{R}^{r_{L-1}}$ for node $y$ of the dense output operator. We prove lower and upper bounds on the maximal rank that $[\mathcal{A}^y]_{I,J}$ can take as these weights vary. Our proof relies on the rank-multiplicative property of the Kronecker product ($rank(A\odot B)=rank(A)\cdot rank(B)$ for any real matrices $A$ and $B$ — see Bellman (1970) for proof), but is otherwise elementary.

Beginning with the lower bound, consider the following weight setting ($\mathbf{e}_\gamma$ here stands for a vector holding 1 in entry $\gamma$ and 0 at all other entries, $\mathbf{0}$ stands for a vector holding 0 at all entries, and $\mathbf{1}$ stands for a vector holding 1 at all entries, with the dimension of a vector to be understood by context):

$$\mathbf{a}^{0,\gamma}=\begin{cases}\mathbf{e}_\gamma&,\ \gamma\le\min\{r_0,M\}\\\mathbf{0}&,\ \text{otherwise}\end{cases}\qquad\mathbf{a}^{1,\gamma}=\mathbf{1}\qquad\mathbf{a}^{l,\gamma}=\mathbf{e}_1\ \text{ for }l=2\dots L-1\qquad\mathbf{a}^{L,y}=\mathbf{e}_1$$

Let $n\in[N/4]$. Recalling the definition of $I_{l,k}$ and $J_{l,k}$ from eq. 6, consider the sets $I_{1,n}$ and $J_{1,n}$, as well as $I_{0,4(n-1)+t}$ and $J_{0,4(n-1)+t}$ for $t\in[4]$. $(I_{1,n},J_{1,n})$ is a partition of $[4]$, i.e. $I_{1,n}\cup J_{1,n}=[4]$, and for every $t\in[4]$ we have $I_{0,4(n-1)+t}=\{1\}$ and $J_{0,4(n-1)+t}=\emptyset$ if $t$ belongs to $I_{1,n}$, and otherwise $I_{0,4(n-1)+t}=\emptyset$ and $J_{0,4(n-1)+t}=\{1\}$ if $t$ belongs to $J_{1,n}$. This implies that for an arbitrary vector $\mathbf{v}$, the matricization $[\mathbf{v}]_{I_{0,4(n-1)+t},J_{0,4(n-1)+t}}$ is equal to $\mathbf{v}$ if $t\in I_{1,n}$, and to $\mathbf{v}^\top$ if $t\in J_{1,n}$. Accordingly, for any $\gamma\in[r_0]$:

$$\bigodot_{t=1}^{4}[\mathbf{a}^{0,\gamma}]_{I_{0,4(n-1)+t},J_{0,4(n-1)+t}}=\begin{cases}\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}&,\ |I_{1,n}|=4\ \ |J_{1,n}|=0\\(\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma})(\mathbf{a}^{0,\gamma})^\top&,\ |I_{1,n}|=3\ \ |J_{1,n}|=1\\(\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma})(\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma})^\top&,\ |I_{1,n}|=2\ \ |J_{1,n}|=2\\(\mathbf{a}^{0,\gamma})(\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma})^\top&,\ |I_{1,n}|=1\ \ |J_{1,n}|=3\\(\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma}\odot\mathbf{a}^{0,\gamma})^\top&,\ |I_{1,n}|=0\ \ |J_{1,n}|=4\end{cases}$$

Assume that $\gamma\le\min\{r_0,M\}$. By our setting $\mathbf{a}^{0,\gamma}=\mathbf{e}_\gamma$, so the above matrix holds 1 in a single entry and 0 in all the rest. Moreover, if the matrix is not a row or column vector, i.e. if both $I_{1,n}$ and $J_{1,n}$ are non-empty, the column index and row index of the entry holding 1 are both unique w.r.t. $\gamma$, i.e. they do not repeat as $\gamma$ ranges over $1\dots\min\{r_0,M\}$. We thus have:

$$rank\,[\phi^{1,\gamma}]_{I_{1,n},J_{1,n}}=rank\Big(\sum_{\alpha=1}^{r_0}a^{1,\gamma}_\alpha\bigodot_{t=1}^{4}[\mathbf{a}^{0,\alpha}]_{I_{0,4(n-1)+t},J_{0,4(n-1)+t}}\Big)=\begin{cases}\min\{r_0,M\}&,\ I_{1,n}\ne\emptyset\wedge J_{1,n}\ne\emptyset\\1&,\ I_{1,n}=\emptyset\vee J_{1,n}=\emptyset\end{cases}$$

Under the chosen weights for the higher layers, $[\mathcal{A}^y]_{I,J}=\bigodot_{t=1}^{N/4}[\phi^{1,1}]_{I_{1,t},J_{1,t}}$, and the rank-multiplicative property of the Kronecker product gives:

$$rank[\mathcal{A}^y]_{I,J}=\prod_{t=1}^{N/4}rank\,[\phi^{1,1}]_{I_{1,t},J_{1,t}}=\min\{r_0,M\}^S$$

where $S:=|\{t\in[N/4]:I_{1,t}\ne\emptyset\wedge J_{1,t}\ne\emptyset\}|$. This equality holds for the specific weight setting we defined above. Maximizing over all weight settings gives the sought after lower bound:

$$\max_{\{\mathbf{a}^{l,j}\}_{l,j},\,\mathbf{a}^{L,y}}rank[\mathcal{A}^y]_{I,J}\ge\min\{r_0,M\}^S$$
Since the number rows and columns\nin [6\u00b0 Dn con , is Mllurl and Mlyuel respectively, we may incorporate these terms into the inequality, ob-\ntaining:\n4 I-1A(k\u2014-1)+t\npL . mini tellJield py\nrank|\u00a2 \u201cViaJie < min {Mu \u2014 c\nThe right hand side here is equal to oan by definition, so our inductive hypotheses indeed holds for all 1 =\n1...L \u2014 1. To establish the sought after upper bound on the rank of [A] 7,7, we recall that the latter is given\na=l t=1\nry out a series of steps similar to before, while making use of our inductive hypotheses for 1 = L \u2014 1\nSince [.A\u00a5] 1,7 has M\"'! rows and M'\u00a5! columns, we may include these terms in the inequality, thus reaching\nthe upper bound we set out to prove."}, {"section_index": "17", "section_name": "B SEPARATION RANK AND THE L* DISTANCE FROM SEPARABLE FUNCTIONS", "section_text": "Our analysis of correlations modeled by convolutional networks is based on the concept of separation rank\nWhen the separation rank of a function w.xr.t. a partition of its input is equal to 1, the functiot\ns separable, meaning it does not model any interaction between sides of the partition. We argued that the highe\nhe separation rank, the farther the function is from this situation, i.e. the stronger the correlation it induce\ndetween sides of the partition. In the current appendix we formalize this argument, by relating separation ranl\n0 the L? distance from the set of separable functions. We begin by defining and characterizing a normalize\nscale invariant) version of this distance (app. It is then shown (app. that separation rank provides ai\nipper bound on the normalized distance. Finally, a lower bound that applies to deep convolutional arithmeti\ncircuits is derived (app based on the lower bound for their separation ranks established in sec.\nTogether, these steps imply that our entire analysis, facilitated by upper and lower bounds on separation rank\nof convolutional arithmetic circuits, can be interpreted as based on upper and lower bounds on (normalized\n[,? distances from separable functions.\nFor a function h\u20ac L?((IR*))) (which is not identically zero), the normalized L? distance from the set of sepa-\nrable functions w.rt. (I. J), is defined as follows:\nD(h;I, J) =a. inf |e(xcay 52x) = glory iy )9! Oise 9)\nWAIL gen2caesyltly\n\u00bbL\u20141l,a\n\no Diuete-ra)\nL-1l,a\n\nd Ins stunts)\nrank[A\u00a5] 1,3 = rank (eo\n\na=1\n\n~\nLey ,;L\u2014l,a\n\n4\nTLL L\u2014l,a\n< > rank (\u00a9 [O\u00b0 Ira ede\u2014ae\na=1 1\nTL-1 774 L-l,a\n=) [],_, rere dre teas\no=1 t=1\nTL-17q4 bit\n< Vo.\n= o=1 t=1\n\n4 L-1,t\n= Thi {\u00a2\nmeasurable and square-integrable (i.e. belong to L? 
"}, {"section_index": "17", "section_name": "B SEPARATION RANK AND THE L² DISTANCE FROM SEPARABLE FUNCTIONS", "section_text": "Our analysis of correlations modeled by convolutional networks is based on the concept of separation rank. When the separation rank of a function w.r.t. a partition of its input is equal to 1, the function is separable, meaning it does not model any interaction between sides of the partition. We argued that the higher the separation rank, the farther the function is from this situation, i.e. the stronger the correlation it induces between sides of the partition. In the current appendix we formalize this argument, by relating separation rank to the $L^2$ distance from the set of separable functions. We begin by defining and characterizing a normalized (scale invariant) version of this distance. It is then shown (app. B.2) that separation rank provides an upper bound on the normalized distance. Finally, a lower bound that applies to deep convolutional arithmetic circuits is derived (app. B.3), based on the lower bound for their separation ranks established in sec. 5.2. Together, these steps imply that our entire analysis, facilitated by upper and lower bounds on separation ranks of convolutional arithmetic circuits, can be interpreted as based on upper and lower bounds on (normalized) $L^2$ distances from separable functions.

Throughout the appendix, we assume functions are measurable and square-integrable (i.e. belong to $L^2$ over the respective Euclidean space), and in app. B.3 also make use of the fact that representation functions ($f_{\theta_d}$) of a convolutional arithmetic circuit can be regarded as linearly independent. Finally, for convenience, we now fix $(I,J)$ — an arbitrary partition of $[N]$. Specifically, $I$ and $J$ are disjoint subsets of $[N]$ whose union gives $[N]$, denoted by $I=\{i_1,\dots,i_{|I|}\}$ with $i_1<\dots<i_{|I|}$, and $J=\{j_1,\dots,j_{|J|}\}$ with $j_1<\dots<j_{|J|}$.

For a function $h\in L^2((\mathbb{R}^s)^N)$ (which is not identically zero), the normalized $L^2$ distance from the set of separable functions w.r.t. $(I,J)$ is defined as follows:

$$D(h;I,J):=\frac{1}{\|h\|}\inf_{\substack{g\in L^2((\mathbb{R}^s)^{|I|})\\ g'\in L^2((\mathbb{R}^s)^{|J|})}}\big\|h(\mathbf{x}_1,\dots,\mathbf{x}_N)-g(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}})\,g'(\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}})\big\|$$

The normalization (division by $\|h\|$) admits scale invariance to $D(h;I,J)$, and is of critical importance — without it, rescaling $h$ would accordingly rescale the distance measure, rendering the latter uninformative in terms of deviation from separability.¹¹

It is worthwhile noting the resemblance between $D(h;I,J)$ and the concept of mutual information (see Cover and Thomas (2012) for a comprehensive introduction). Both measures quantify the interaction that a normalized function induces between input variables, by measuring distance from separable functions. The difference between the measures is threefold. First, mutual information considers probability density functions (non-negative and in $L^1$), while $D(h;I,J)$ applies to functions in $L^2$. Second, the notion of distance in mutual information is quantified through the Kullback-Leibler divergence, whereas in $D(h;I,J)$ it is simply the $L^2$ metric. Third, while mutual information evaluates the distance from a specific separable function — product of marginal distributions, $D(h;I,J)$ evaluates the minimal distance across all separable functions.

Suppose now that $h$ is given by the following expression:

$$h(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{\mu=1}^{m}\sum_{\mu'=1}^{m'}A_{\mu\mu'}\,\phi_\mu(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}})\,\phi'_{\mu'}(\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}})$$

where $m$ and $m'$ are positive integers, $A$ is an $m$-by-$m'$ real matrix, and $\{\phi_\mu\}_{\mu=1}^{m}$, $\{\phi'_{\mu'}\}_{\mu'=1}^{m'}$ are orthonormal sets of functions in $L^2((\mathbb{R}^s)^{|I|})$, $L^2((\mathbb{R}^s)^{|J|})$ respectively. We refer to such an expression as an orthonormal separable decomposition of $h$, with $A$ being its coefficient matrix. We will show that for any orthonormal separable decomposition, $D(h;I,J)$ is given by the following formula:

$$D(h;I,J)=\sqrt{1-\frac{\sigma_1^2(A)}{\sigma_1^2(A)+\dots+\sigma_{\min\{m,m'\}}^2(A)}}$$

where $\sigma_1(A)\ge\dots\ge\sigma_{\min\{m,m'\}}(A)\ge0$ are the singular values of the coefficient matrix $A$. This implies that if the largest singular value of $A$ accounts for a significant portion of the spectral energy, the normalized $L^2$ distance of $h$ from separable functions is small. On the other hand, if all but a fraction of the spectral energy is attributed to trailing singular values, $h$ is far from being separable ($D(h;I,J)$ is close to 1).

As a first step in deriving the formula, we show that $\|h\|^2=\sigma_1^2(A)+\dots+\sigma_{\min\{m,m'\}}^2(A)$:

$$\|h\|^2=\int h^2(\mathbf{x}_1,\dots,\mathbf{x}_N)\,d\mathbf{x}_1\cdots d\mathbf{x}_N=\sum_{\mu,\bar\mu=1}^{m}\sum_{\mu',\bar\mu'=1}^{m'}A_{\mu\mu'}A_{\bar\mu\bar\mu'}\Big(\int\phi_\mu\phi_{\bar\mu}\Big)\Big(\int\phi'_{\mu'}\phi'_{\bar\mu'}\Big)=\sum_{\mu=1}^{m}\sum_{\mu'=1}^{m'}A_{\mu\mu'}^2
=\sigma_1^2(A)+\dots+\sigma_{\min\{m,m'\}}^2(A)$$

Here, the first equality originates from the definition of the $L^2$ norm, the second makes use of the linearity of integration and Fubini's theorem (see Jones (2001)), the third relies on the orthonormality of $\{\phi_\mu\}_{\mu=1}^{m}$ and $\{\phi'_{\mu'}\}_{\mu'=1}^{m'}$, and the last is an outcome of the fact that the squared Frobenius norm of a matrix, i.e. the sum of squares over its entries, is equal to the sum of squares over its singular values (see Golub and Van Loan).

Next, we characterize the infimum in the definition of $D(h;I,J)$. For $g\in L^2((\mathbb{R}^s)^{|I|})$ and $g'\in L^2((\mathbb{R}^s)^{|J|})$, decompose $g=\sum_{\mu=1}^{m}\alpha_\mu\phi_\mu+\delta$ and $g'=\sum_{\mu'=1}^{m'}\alpha'_{\mu'}\phi'_{\mu'}+\delta'$, where $\delta$ and $\delta'$ are orthogonal to $span\{\phi_\mu\}$ and $span\{\phi'_{\mu'}\}$ respectively. Expanding $\|h-g\,g'\|^2$ and discarding the (non-negative) contribution of the terms involving $\delta$ and $\delta'$ gives:

$$\big\|h(\mathbf{x}_1,\dots,\mathbf{x}_N)-g(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}})\,g'(\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}})\big\|^2\ \ge\ \sum_{\mu=1}^{m}\sum_{\mu'=1}^{m'}\big(A_{\mu\mu'}-\alpha_\mu\alpha'_{\mu'}\big)^2$$

The matrix $(\alpha_\mu\alpha'_{\mu'})_{\mu\mu'}$ has rank at most 1, and conversely any rank-1 matrix corresponds to an admissible choice of $g$ and $g'$, so the infimum over $g,g'$ amounts to the best rank-1 approximation of $A$ in Frobenius norm, leaving exactly the trailing spectral energy:

$$\inf_{\substack{g\in L^2((\mathbb{R}^s)^{|I|})\\ g'\in L^2((\mathbb{R}^s)^{|J|})}}\big\|h-g\,g'\big\|^2=\sigma_2^2(A)+\dots+\sigma_{\min\{m,m'\}}^2(A)$$

Recall that we would like to derive the formula for $D(h;I,J)$ assuming $h$ is given by the orthonormal separable decomposition above. Taking the square root of the two equalities established, and plugging them into the definition of $D(h;I,J)$, we obtain the sought after result.

11 An equivalent definition of $D(h;I,J)$ is the minimal $L^2$ distance between $h/\|h\|$ and a function separable w.r.t. $(I,J)$. Accordingly, we may view $D(h;I,J)$ as operating on normalized functions."}, {"section_index": "18", "section_name": "B.2 UPPER BOUND THROUGH SEPARATION RANK", "section_text": "We now relate $D(h;I,J)$ — the normalized $L^2$ distance of $h\in L^2((\mathbb{R}^s)^N)$ from the set of separable functions w.r.t. $(I,J)$, to $sep(h;I,J)$ — the separation rank of $h$ w.r.t. $(I,J)$ (eq. 5). Specifically, we make use of the formula derived above to establish an upper bound on $D(h;I,J)$ in terms of $sep(h;I,J)$.

Assuming $h$ has finite separation rank (otherwise the bound we derive is trivial), we may express it as $h(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{v=1}^{R}g_v(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}})\,g'_v(\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}})$ for some $R\in\mathbb{N}$. Orthonormalizing the functions $g_1\dots g_R$ and $g'_1\dots g'_R$ turns this into an orthonormal separable decomposition whose coefficient matrix $A$ has rank at most $R$, and therefore:

$$\frac{\sigma_1^2(A)}{\sigma_1^2(A)+\dots+\sigma_{\min\{m,m'\}}^2(A)}\ge\frac{1}{R}$$

The latter holds for any $R\in\mathbb{N}$ that admits such a decomposition, so in particular we may take it to be minimal, i.e. to be equal to $sep(h;I,J)$, bringing forth the sought after upper bound:

$$D(h;I,J)\le\sqrt{1-\frac{1}{sep(h;I,J)}}$$

By this inequality, low separation rank implies proximity (in normalized $L^2$ sense) to a separable function. We may use it to translate the upper bounds on separation ranks established for deep and shallow convolutional arithmetic circuits into upper bounds on normalized $L^2$ distances from separable functions.
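The formula and the bound just derived are straightforward to evaluate given a coefficient matrix. A small sketch (ours):

```python
# D(h; I, J) from the singular values of the coefficient matrix, and the upper bound
# via separation rank (illustrative check).
import numpy as np

def normalized_distance(A):
    s = np.linalg.svd(A, compute_uv=False)
    return np.sqrt(1.0 - s[0] ** 2 / np.sum(s ** 2))

A = np.random.randn(8, 8)
sep = np.linalg.matrix_rank(A)  # separation rank = rank of the coefficient matrix (cf. claim 1)
print(normalized_distance(A), "<=", np.sqrt(1 - 1 / sep))
```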
To completely frame our analysis in terms of the latter measure, a translation of the lower bound on separation ranks of deep convolutional arithmetic circuits (sec. 5.2) is also required. The inequality above does not facilitate such translation, and in fact, it is easy to construct functions $h$ whose separation ranks are high yet are very close (in normalized $L^2$ sense) to separable functions. However, as we show in app. B.3 below, the specific lower bound of interest can indeed be translated, and our analysis may entirely be framed in terms of normalized $L^2$ distance from separable functions."}, {"section_index": "19", "section_name": "B.3 LOWER BOUND FOR DEEP CONVOLUTIONAL ARITHMETIC CIRCUITS", "section_text": "Let $h_y\in L^2((\mathbb{R}^s)^N)$ be a function realized by a deep convolutional arithmetic circuit (fig. 1(a) with size-4 pooling windows and $L=\log_4 N$ hidden layers), i.e. one given by eq. 2, where $f_{\theta_1}\dots f_{\theta_M}\in L^2(\mathbb{R}^s)$ are linearly independent representation functions, and $\mathcal{A}^y$ is a coefficient tensor of order $N$ and dimension $M$ in each mode, determined by the linear weights of the network ($\{\mathbf{a}^{l,j}\}_{l,j}$, $\mathbf{a}^{L,y}$) through the hierarchical decomposition in eq. 3. Rearrange eq. 2 by grouping the indexes $d_1\dots d_N$ in accordance with the partition $(I,J)$:

$$h_y(\mathbf{x}_1,\dots,\mathbf{x}_N)=\sum_{d_{i_1}\dots d_{i_{|I|}}=1}^{M}\ \sum_{d_{j_1}\dots d_{j_{|J|}}=1}^{M}\mathcal{A}^y_{d_1\dots d_N}\Big(\prod_{t=1}^{|I|}f_{\theta_{d_{i_t}}}(\mathbf{x}_{i_t})\Big)\Big(\prod_{t=1}^{|J|}f_{\theta_{d_{j_t}}}(\mathbf{x}_{j_t})\Big)$$

Let $m=M^{|I|}$ and $m'=M^{|J|}$, and define the following mappings:

$$\mu:[M]^{|I|}\to[m]\ ,\quad\mu(d_{i_1},\dots,d_{i_{|I|}})=1+\sum_{t=1}^{|I|}(d_{i_t}-1)M^{t-1}$$
$$\mu':[M]^{|J|}\to[m']\ ,\quad\mu'(d_{j_1},\dots,d_{j_{|J|}})=1+\sum_{t=1}^{|J|}(d_{j_t}-1)M^{t-1}$$

along with the functions $\phi_{\mu(d_{i_1},\dots,d_{i_{|I|}})}(\mathbf{x}_{i_1},\dots,\mathbf{x}_{i_{|I|}}):=\prod_{t=1}^{|I|}f_{\theta_{d_{i_t}}}(\mathbf{x}_{i_t})$ and, analogously, $\phi'_{\mu'(d_{j_1},\dots,d_{j_{|J|}})}(\mathbf{x}_{j_1},\dots,\mathbf{x}_{j_{|J|}}):=\prod_{t=1}^{|J|}f_{\theta_{d_{j_t}}}(\mathbf{x}_{j_t})$. With these definitions, the rearranged expression for $h_y$ takes the form of a separable decomposition whose coefficient matrix is exactly $[\mathcal{A}^y]_{I,J}$ — the matricization of $\mathcal{A}^y$ w.r.t. $(I,J)$.

We now direct our attention to the special case where $f_{\theta_1}\dots f_{\theta_M}\in L^2(\mathbb{R}^s)$ — the network's representation functions, are known to be orthonormal. The general setting, in which only linear independence is known, will be treated thereafter. Orthonormality of representation functions implies that $\phi_1\dots\phi_m\in L^2((\mathbb{R}^s)^{|I|})$ are orthonormal as well:

$$\langle\phi_\mu,\phi_{\bar\mu}\rangle=\int\prod_{t=1}^{|I|}f_{\theta_{d_{i_t}}(\mu)}(\mathbf{x}_{i_t})\,f_{\theta_{d_{i_t}}(\bar\mu)}(\mathbf{x}_{i_t})\,d\mathbf{x}_{i_1}\cdots d\mathbf{x}_{i_{|I|}}=\prod_{t=1}^{|I|}\big\langle f_{\theta_{d_{i_t}}(\mu)},f_{\theta_{d_{i_t}}(\bar\mu)}\big\rangle=\prod_{t=1}^{|I|}\begin{cases}1&,\ d_{i_t}(\mu)=d_{i_t}(\bar\mu)\\0&,\ \text{otherwise}\end{cases}=\begin{cases}1&,\ \mu=\bar\mu\\0&,\ \text{otherwise}\end{cases}$$

Here, the second equality makes use of Fubini's theorem, the third relies on the (temporary) assumption that representation functions are orthonormal, and the last owes to the fact that $\mu\mapsto(d_{i_1}(\mu),\dots,d_{i_{|I|}}(\mu))$ is an injective mapping. A similar sequence of steps (applied to $\phi'_{\mu'},\phi'_{\bar\mu'}$) shows that in addition to $\phi_1\dots\phi_m$, the functions $\phi'_1\dots\phi'_{m'}\in L^2((\mathbb{R}^s)^{|J|})$
will also be orthonormal if $f_{\theta_1}\dots f_{\theta_M}$ are. We conclude that if representation functions are orthonormal, the rearranged expression indeed provides an orthonormal separable decomposition of $h_y$, and the formula derived in app. B.1 may be applied:

$$D(h_y;I,J)=\sqrt{1-\frac{\sigma_1^2([\mathcal{A}^y]_{I,J})}{\sigma_1^2([\mathcal{A}^y]_{I,J})+\dots+\sigma_{\min\{M^{|I|},M^{|J|}\}}^2([\mathcal{A}^y]_{I,J})}}$$

In sec. 5.2 we showed that the maximal separation rank realizable by a deep network is greater than or equal to $\min\{r_0,M\}^S$, where $M$, $r_0$ are the number of channels in the representation and first hidden layers (respectively), and $S$ stands for the number of index quadruplets (sets of the form $\{4k-3,4k-2,4k-1,4k\}$, $k\in[N/4]$) that are split by the partition $(I,J)$. To prove this lower bound, we presented in app. A.3 a setting for the linear weights of the network ($\{\mathbf{a}^{l,j}\}_{l,j}$, $\mathbf{a}^{L,y}$) under which $rank[\mathcal{A}^y]_{I,J}=\min\{r_0,M\}^S$. Careful examination of the proof shows that with this particular weight setting, not only is the rank of $[\mathcal{A}^y]_{I,J}$ equal to $\min\{r_0,M\}^S$, but also, all of its non-zero singular values are equal to one another.¹² This implies that $\sigma_1^2([\mathcal{A}^y]_{I,J})/(\sigma_1^2([\mathcal{A}^y]_{I,J})+\dots+\sigma_{\min\{M^{|I|},M^{|J|}\}}^2([\mathcal{A}^y]_{I,J}))=\min\{r_0,M\}^{-S}$, and since we currently assume that $f_{\theta_1}\dots f_{\theta_M}$ are orthonormal, the formula above applies and we obtain $D(h_y;I,J)=\sqrt{1-\min\{r_0,M\}^{-S}}$. Maximizing over all possible weight settings, we arrive at the following lower bound for the normalized $L^2$ distance from separable functions brought forth by a deep convolutional arithmetic circuit:

$$\sup_{\{\mathbf{a}^{l,j}\}_{l,j},\,\mathbf{a}^{L,y}}D(h_y;I,J)\ge\sqrt{1-\min\{r_0,M\}^{-S}}$$

12 To see this, note that with the specified weight setting, for every $n\in[N/4]$, $[\phi^{1,1}]_{I_{1,n},J_{1,n}}$ has one of two forms: it is either a non-zero (row/column) vector, or it is a matrix holding 1 in several entries and 0 in all the rest, where any two entries holding 1 reside in different rows and different columns. The first of the two forms admits a single non-zero singular value. The second brings forth several singular values equal to 1, possibly accompanied by null singular values. In both cases, all non-zero singular values of $[\phi^{1,1}]_{I_{1,n},J_{1,n}}$ are equal to one another. Now, since $[\mathcal{A}^y]_{I,J}=\bigodot_{n=1}^{N/4}[\phi^{1,1}]_{I_{1,n},J_{1,n}}$, and since the Kronecker product multiplies singular values (see Bellman (1970)), we have that all non-zero singular values of $[\mathcal{A}^y]_{I,J}$ are equal, as required.

Turning to the general case, we omit the assumption that representation functions $f_{\theta_1}\dots f_{\theta_M}\in L^2(\mathbb{R}^s)$ are orthonormal, and merely rely on their linear independence. The latter implies that the dimension of $span\{f_{\theta_1}\dots f_{\theta_M}\}$ is $M$, thus there exist orthonormal functions $\varphi_1\dots\varphi_M\in L^2(\mathbb{R}^s)$ that span it. Let $F\in\mathbb{R}^{M\times M}$ be a transition matrix between the bases — the matrix defined by $\varphi_c=\sum_{d=1}^{M}F_{cd}\,f_{\theta_d}$, $\forall c\in[M]$. Suppose now that we replace the original representation functions $f_{\theta_1}\dots f_{\theta_M}$ by the orthonormal ones $\varphi_1\dots\varphi_M$. Using the latter, the lower bound above applies, and there exists a setting for the linear weights of the network — $\{\mathbf{a}^{l,j}\}_{l,j}$, $\mathbf{a}^{L,y}$, such that $D(h_y;I,J)\ge\sqrt{1-\min\{r_0,M\}^{-S}}$. Recalling the structure of convolutional arithmetic circuits (fig. 1(a)), one readily sees that if we return to the original representation functions $f_{\theta_1}\dots f_{\theta_M}$, while multiplying conv weights in hidden layer 0 by $F$ (i.e. mapping $\mathbf{a}^{0,\gamma}\mapsto F^\top\mathbf{a}^{0,\gamma}$), the overall function $h_y$ remains unchanged, and in particular $D(h_y;I,J)\ge\sqrt{1-\min\{r_0,M\}^{-S}}$ still holds. We conclude that the lower bound applies even if representation functions are not orthonormal.

To summarize, we translated the lower bound from sec. 5.2 on the maximal separation rank realized by a deep convolutional arithmetic circuit, into a lower bound on the maximal normalized $L^2$ distance from separable functions. This, along with the translation of upper bounds facilitated by app. B.2, implies that the analysis carried out in the paper, which studies correlations modeled by convolutional networks through the notion of separation rank, may equivalently be framed in terms of normalized $L^2$ distance from separable functions. We note however that there is one particular aspect of our original analysis that does not carry through the translation. Namely, in claim 2 it was shown that separation ranks realized by convolutional arithmetic circuits are maximal almost always, i.e. for all linear weight settings but a set of (Lebesgue) measure zero. Put differently, for a given partition $(I,J)$, the maximal separation rank brought forth by a network characterizes almost all functions realized by it. An equivalent statement does not hold with the continuous measure of normalized $L^2$ distance from separable functions. The behavior of this measure across the hypotheses space of a network is non-trivial, and forms a subject for future research.
We did not use dropout\n, as the limiting factor in terms of accuracies was the difficulty of fitting training data (as opposed to\noverfitting) \u2014 see fig\nFor training the conventional convolutional rectifier networks, we merely switched the hyper-parameters of\nAdam to the recommended settings specified in/Kingma and Ba (a = 0.001, 61 = 0.9, 82 = 0.999),\nand set weight decay to the standard value of 0.0001.\n'* To see this, note that with the specified weight setting, for every n \u20ac [N/4], ['\u2019'] 1... .71.\u00bb has one of twe\nforms: it is either a non-zero (row/column) vector, or it is a matrix holding 1 in several entries and 0 in all the\nrest, where any two entries holding 1 reside in different rows and different columns. The first of the two form:\nadmits a single non-zero singular value. The second brings forth several singular values equal to 1, possibly\naccompanied by null singular values. In both cases, all non-zero singular values of [\u00a2'\"] TynJi.n are equal tc\none another. Now, since [A\"] 7,7 = oN [oh Ty nsJi.n\u00bb and since the Kronecker product multiplies singular\nvalues (see|B 1970) ), we have that all non-zero singular values of [.A\u201d]];, are equal, as required.\n[N/4]) that are split by the partition (I, J). To prove this lower bound, we presented in app.\nsetting for the linear weights of the network ({a7}),,,a\u2019*\u201d) under which rank[.A\"]1,7 = min{ro, M}5.\nCareful examination of the proof shows that with this particular weight setting, not only is the rank of [.A\u00a5]7,7\nequal to min{ro, uM}, but also, all of its non-zero singular values are equal to one another. |'*] This implies\nthat 07 ([A%J1,7)/(o2 ([A\"I1,7) + +++ + Cin gmp LA\"]1,7)) = min{ro, M}~%, and since we currently\nassume that fo,...fo,, are orthonormal, eq applies and we obtain D(hy; J, J) = \\/1 \u2014 min{ro, M}~*.\nMaximizing over all possible weight settings, we arrive at the following lower bound for the normalized L?\n\ndistance from separable functions brought forth by a deep convolutional arithmetic circuit:\nOO\nIn this appendix we provide implementation details omitted from the description of our experiments in sec.[7]\n\nOur implementation, available online at pet E ee //github . com/HUJT\u2014 Deep/inductive-pooling\nis based on the SimNets branch (C E a 2014)). The latter realizes\n\nconvolutional arithmetic circuits in log-: = \u201cfor numerical stability."}, {"section_index": "20", "section_name": "> MORPHOLOGICAL CLOSURE", "section_text": "[t is not difficult to see that any pixel active in the original image is necessarily active in its closure. Moreover,\npixels that are originally inactive yet are surrounded by active ones will be turned on in the closure, hence\nthe effect of \u201cgap filling\u201d. Finally, we note that the particular sequence of steps described above represents the\nmost basic form of morphological closure. The interested reader is referred to[Haralick et al. 7) for a much\nmore comprehensive introduction.\nThe synthetic dataset used in our experiments s\n(blobs). One of the tasks facilitated by this dataset is the detection of morphologically closed blobs, i.e. of\nimages that are relatively similar to their morphological closure. The procedure we followed for computing the\nmorphological closure of a binary image is:\n1. Pad the given image with background (0 value) pixels\n\n. 
"}, {"section_index": "21", "section_name": "D MORPHOLOGICAL CLOSURE", "section_text": "The synthetic dataset used in our experiments (sec. 7) consists of binary images, each displaying a random distorted oval shape (blob) with missing pixels in its interior. One of the tasks facilitated by this dataset is the detection of morphologically closed blobs, i.e. of images that are relatively similar to their morphological closure. The procedure we followed for computing the morphological closure of a binary image is:

1. Pad the given image with background (0 value) pixels.
2. Morphological dilation: simultaneously turn on (set to 1) all pixels that have a (left, right, top or bottom) neighbor originally active (holding 1).
3. Morphological erosion: simultaneously turn off (set to 0) all pixels that have a (left, right, top or bottom) neighbor currently inactive (holding 0).
4. Remove the padding added in step 1.

It is not difficult to see that any pixel active in the original image is necessarily active in its closure. Moreover, pixels that are originally inactive yet are surrounded by active ones will be turned on in the closure, hence the effect of "gap filling". Finally, we note that the particular sequence of steps described above represents the most basic form of morphological closure. The interested reader is referred to Haralick et al. (1987) for a much more comprehensive introduction.
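A direct transcription of the four steps, in pure numpy (our sketch; helper names and the padding width are illustrative):

```python
# Basic morphological closure with a 4-neighbor structuring element (illustrative sketch).
import numpy as np

def neighbors(img):
    # boolean mask of pixels having any active (left/right/top/bottom) neighbor
    n = np.zeros_like(img)
    n[1:, :] |= img[:-1, :]
    n[:-1, :] |= img[1:, :]
    n[:, 1:] |= img[:, :-1]
    n[:, :-1] |= img[:, 1:]
    return n

def morphological_closure(img, pad=2):
    p = np.pad(img.astype(bool), pad)   # 1. pad with background (0) pixels
    p = p | neighbors(p)                # 2. dilation: turn on pixels with an active neighbor
    p = p & ~neighbors(~p)              # 3. erosion: turn off pixels with an inactive neighbor
    return p[pad:-pad, pad:-pad]        # 4. remove the padding added in step 1
```

The closedness score of sec. 7 is then simply img.sum() / morphological_closure(img).sum().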
[{"section_index": "0", "section_name": "LEARNING INVARIANT FEATURE SPACES TO TRANS:\nFER SKILLS WITH REINFORCEMENT LEARNING", "section_text": "Abhishek Gupta'* Coline Devin'*, YuXuan Liu\u2019, Pieter Abbeel**, Sergey Levine"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "People can learn large repertoires of motor skills autonomously from their own experience. How-\never, learning is accelerated substantially when the learner is allowed to observe another persor\nperforming the same skill. In fact, human infants learn faster when they observe adults performing\na task, even when the adult performs the task differently from the child, and even when the adult\nperforms the task incorrectly (Meltzoff|{1999). Clearly, we can accelerate our own skill learning by\nobserving a novel behavior, even when that behavior is performed by an agent with different phys-\nical capabilities or differences in morphology. Furthermore, evidence in neuroscience suggests that\nthe parts of the brain in monkeys that respond to the pose of the hand can quickly adapt to instead\nrespond to the pose of the end-effector of a tool held in the hand {Uitte t al] 2008). This suggests\nthat the brain learns an invariant feature space for the task (e.g., reaching with a tool) that is inde-\npendent of the morphology of the limb performing that task. Mirror neurons also fire both when the\nanimal performs a task and when it observes another animal performing it\n2004| |Ferrari et al.][2005). Can we enable robots and other autonomous agents to transfer knowledge\n\nfrom other agents with different morphologies by learning such invariant representations?\nIn robotics and reinforcement learning, prior works have considered building direct isomorphisms\nbetween state spaces, as discussed in Section [2] However, most of these methods require specific\ndomain knowledge to determine how to form the mapping, or operate on simple, low-dimensiona\nenvironments. For instance, [Taylor et al.] find a mapping between state spaces by searching\nthrough all possible pairings. Learning stat state isomorphisms involves an assumption that the\ntwo domains can be brought into correspondence, which may not be the case for morphologically\n*These authors contributed equally to this work\n' UC Berkeley, Department of Electrical Engineering and Computer Science\n* OpenAl"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "People can learn a wide range of tasks from their own experience, but can also\nlearn from observing other creatures. This can accelerate acquisition of new skills\neven when the observed agent differs substantially from the learning agent in terms\nof morphology. In this paper, we examine how reinforcement learning algorithms\ncan transfer knowledge between morphologically different agents (e.g., different\nrobots). We introduce a problem formulation where two agents are tasked with\nlearning multiple skills by sharing information. Our method uses the skills that\nwere learned by both agents to train invariant feature spaces that can then be used\nto transfer other skills from one agent to another. The process of learning these\ninvariant feature spaces can be viewed as a kind of \u201canalogy making,\u201d or implicit\nlearning of partial correspondences between two distinct domains. 
We evaluate\nour transfer learning algorithm in two simulated robotic manipulation skills, and\nillustrate that we can transfer knowledge between simulated robotic arms with dif-\nferent numbers of links, as well as simulated arms with different actuation mech-\nanisms, where one robot is torque-driven while the other is tendon-driven.\ndifferent agents. Some aspects of the skill may not be transferable at all, in which case they must b\u00a2\nlearned from scratch, but we would like to maximize the information transferred between the agents\nIn this paper, we formulate this multi-agent transfer learning problem in a setting where two agents\nare learning multiple skills. Using the skills that have been already acquired by both agents, each\nagent can construct a mapping from their states into an invariant feature space. Each agent can ther\ntransfer a new skill from the other agent by projecting the executions of that skill into the invariant\nspace, and tracking the corresponding features through its own actions. This provides a well-shaped\nreward function to the learner that allows it to imitate those aspects of the \u201cteacher\u201d agent that are\ninvariant to differences in their morphology, while ignoring the parts of the state that cannot be\nimitated. Since the mapping from the state spaces of each agent into the invariant feature space\nmight be complex and nonlinear, we use deep neural networks to represent the mappings, and we\npresent an algorithm that can learn these mappings from the shared previously acquired skills.\nThe main contributions of our work are a formulation of the multi-skill transfer problem, a definition\nof the common feature space, and an algorithm that can be used to learn the maximally informative\nfeature space for transfer between two agents (e.g., two robots with different morphologies). To\nevaluate the efficiency of this transfer process, we use a reinforcement learning algorithm to transfer\nskills from one agent to another through the invariant feature space. The agents we consider may\ndiffer in state-space, action-space, and dynamics. We evaluate our transfer learning method in two\nsimulated robotic manipulation tasks, and illustrate that we can transfer knowledge between simu-\nlated robotic arms with different numbers of links, as well as simulated arms with different actuation\nmechanisms, where one robot is torque-driven while the other is tendon-driven."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Transfer learning has long been recognized as an important direction in robotics and reinforcement\nlearning (Taylor & Stone] (2009). [Konidaris & Barto] (2006) learned value functions on subsets of\nthe state representation that were shared between tasks, providing a shaping reward in the target\ntask. ) manually construct a function to map a Q-function from one Markov\ndecision process (MDP) to another. [Ammar & Taylor] (2012) manually define a common feature\nspace between the states of two MDPs, and use this feature space to learn a mapping between states.\nLater work by{Ammar et al.| uses unsupervised manifold alignment to assign pairings be-\ntween states for transfer. Like in our method, they aim to transfer skills between robots with different\nconfigurations and action spaces by guiding exploration in the target domain. 
The main difference from our work is that Ammar et al. (2015a) assume the presence of a feature mapping that provides distances between states, and use these (hand designed) features to assign correspondences between states in the different domains. In contrast, we assume that good correspondences in episodic tasks can be extracted through time alignment, and focus on learning the feature mapping itself. Additionally, we do not try to learn a direct mapping between state spaces, but instead try to learn nonlinear embedding functions into a common feature space, as compared to the linear mappings between state spaces learned in Ammar et al. (2015a). Raimalwala et al. (2016) study transfer learning across linear time-invariant (LTI) systems through simple alignment based methods. Although this method is quite effective in enabling transfer in these systems, it does not apply to the higher dimensional continuous control tasks we consider, which may have non-linear dynamics and may not be LTI.
In machine learning, Pan & Yang (2010) provide an extensive survey on transfer learning which addresses the case of train and test data being drawn from different distributions, as well as learning models that succeed on multiple, related tasks. Ben-David & Schuller (2003) derive theoretical guarantees on this sort of multitask learning and provide a formal framework for defining task relatedness. In deep learning, Caruana (1997) show that a multitask network can leverage a shared representation of the input to learn multiple tasks more quickly together than separately.
More recent work in deep learning has also looked at transferring policies by reusing policy parameters, using either regularization or novel neural network architectures, though this work has not looked at transfer between agents with structural differences in state, such as different dimensionalities. Our approach is largely orthogonal to policy transfer methods, since our aim is not to directly transfer a skill policy, which is typically impossible in the presence of substantial morphological differences, but rather to learn a shared feature space that can be used to transfer information about a skill that is shared across robots, while ignoring those aspects that are not shared. Our own recent work has looked at morphological differences in the context of multi-agent and multi-task learning (Devin et al., 2016), by reusing neural network components across agent/task combinations. In contrast to that work, which transferred components of policies, our present work aims to learn common feature spaces in situations where we have just two agents. We do not aim to transfer parts of policies themselves, but instead look at shared structure in the states visited by optimal policies, which can be viewed as a kind of analogy making across domains.
Learning feature spaces has also been studied in the domain of computer vision as a mechanism for domain adaptation and metric learning. Xing et al. (2002) find a linear transformation of the input data to satisfy pairwise similarity constraints, while past work by Chopra et al. (2005) used Siamese networks to learn a feature space where paired images are brought close together and unpaired images are pushed apart. This enables a semantically meaningful metric space to be learned with only pairs as labels. Later work on domain adaptation by Tzeng et al. (2015) and Ganin et al. (2016) use an adversarial approach to learn an image embedding that is useful for classification and invariant to the input image's domain. 
We use the idea of learning a metric space from paired states, though\nthe adversarial approach could also be used with our method as an alternative objective function in\nfuture work."}, {"section_index": "4", "section_name": "3 PROBLEM FORMULATION AND ASSUMPTIONS", "section_text": "We formalize our transfer problem in a general way by considering a source domain and a target\ndomain, denoted Ds and Dr, which each correspond to Markov decision processes (MDPs) Ds =\n(.%s,@s,Ts,Rs) and Dr = (.Ar, Hr,Tr, Rr), each with its own state space .Y, action space Y,\ndynamics or transition function T, and reward function R. In general, the state and action spaces\nin the two domains might be completely different. Correspondingly, the dynamics Ts and Tr also\ndiffer, often dramatically. However, we assume that the reward functions share some structural\nsimilarity, in that the state distribution of an optimal policy in the source domain will resemble the\nstate distribution of an optimal policy in the target domain when projected into some common feature\nspace. For example, in one of our experimental tasks, Ds corresponds to a robotic arm with 3 links,\nwhile Dy is an arm with 4 links. While the dimensionalities of the states and action are completely\ndifferent, the two arms are performing the same task, with a reward that depends on the position of\nthe end-effector. Although this end-effector is a complex nonlinear function of the state, the reward\nis structurally similar for both agents."}, {"section_index": "5", "section_name": "3.1 COMMON FEATURE SPACES", "section_text": "We can formalize this common feature space assumption as following: if 2s(ss) denotes the state\ndistribution of the optimal policy in Ds, and (sr) denotes the state distribution of the optimal\npolicy in Dr, it is possible to learn two functions, f and g, such that p(f(ss)) = p(g(sr)) for ss ~ ms\nand sr ~ Zr. That is, the images of zs under f and zy under g correspond to the same distribution.\nThis assumption is trivially true if we allow lossy mappings f and g (e.g. if f(ss) = g(sr) =0 for all\nss and s7). However, the less information we lose in f and g, the more informative the shared feature\nwill be for the purpose of transfer. So while we might not in general be able to fully recover 2 from\nthe image of zs under f, we can attempt to learn f and g to maximize the amount of information\ncontained in the shared space."}, {"section_index": "6", "section_name": "3.2 LEARNING WITH MULTIPLE SKILLS", "section_text": "In order to learn the common feature space, we need examples from both domains. While both\nagents could in principle learn a common feature space through direct exploration, in this work we\ninstead assume that the agents have prior knowledge about each other, in the form of other skills that\nthey have both learned. This assumption is reasonable, since many practical use-cases of transfer\ninvolve two agents that already have competence in a range of simple settings, and wish to transfer\nthe competence of one agent in a new setting to another one. For example, we might wish to transfer\na particular cooking skill from one home robot to another one. in a setting where both robots have\nalready learned some basic manipulation behaviors that can allow us to build a common feature\nspace between the two robots. 
Humans similarly leverage their extensive prior knowledge to aid in\ntransfer, by recognizing limbs and hands and understanding their function.\nTo formalize the setting where the two agents can perform multiple tasks, we divide the state space\nin each of the two domains into an agent-specific state s, and a task-specific state Seny. A similar\npartitioning of the state variables was previously discussed by [Devin et al.| (2016), and is closely\nrelated to the agent-space proposed by [Konidaris] . For simplicity, we will consider a case\nwhere there are just two skills: one proxy skill that has been learned by both agents, and one test\nskill that has been learned by the source agent in the domain Ds and is currently being transferred\nto the target agent in domain Dr. We will use Dsy and Dr, to denote the proxy task domains for\nthe source and target agents. We assume that Ds and Dsp (and similarly Dr and Dr,) differ only\nin their reward functions and task-specific states, with the agent-specific state spaces .%, and action\nspaces being the same between the proxy and test domains. For example Ds, might correspond to\na 3-link robot pushing an object, while Ds might correspond to the same robot opening a drawer,\nand Dr, and Dr correspond to a completely different robot performing those tasks. Then, we can\nlearn functions f and g on the robot-specific states of the proxy domains, and use them to transfer\nknowledge from Ds to Dr.\nThe idea in this setup is that both agents will have already learned the proxy task, and we can com:\npare how they perform this task in order to determine the common feature space. This is a natura\nproblem setup for many robotic transfer learning problems, as well as other domains where multipl\ndistinct agents might need to each learn a large collection of skills, exchanging their experience anc\nlearning which information they can and cannot transfer from each other. In a practical scenario\neach robot might have already learned a large number of basic skills, some of which were learnec\nby both robots. These skills are candidate proxy tasks that the robots can use to learn their sharec\nspace, which one robot can then use to transfer knowledge from the other one and more quickly\nlearn skills that it does not yet possess."}, {"section_index": "7", "section_name": "3.3. ESTIMATING CORRESPONDENCES FROM PROXY SKILL", "section_text": "The proxy skill is useful for learning which pairs of agent-specific states correspond across bot\ndomains. We want to learn a pairing P, which is a list of pairs of states in both domains whict\nare corresponding. This is then used for the contrastive loss as described in Section Thess\ncorrespondences could be obtained through an unsupervised alignment procedure but in our methoc\nwe explore two simpler approaches exploiting the fact that the skills we consider are episodic."}, {"section_index": "8", "section_name": "3.3.2 ALTERNATING OPTIMIZATION USING DYNAMIC TIME WARPING", "section_text": "However, this alignment is sensitive to time based alignment and may not be very robust if the\nagents are performing the task at somewhat different rates. In order to address this, we formulate an\nalternating optimization procedure to be more robust than time-based alignment. This optimization\nalternates between learning a common feature space using currently estimated correspondences, and\nre-estimating correspondences using the currently learned feature space. 
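As a rough illustration of this alternating loop, consider the following plain Python/numpy sketch; train_embeddings is a hypothetical stand-in for the feature-space training step of Section 4, and all names here are illustrative assumptions rather than the paper's code:

```python
import numpy as np

def dtw_pairs(traj_s, traj_t, f, g):
    """Dynamic-time-warping alignment of two state sequences, with distances
    measured in the currently learned feature space via embeddings f and g."""
    D = np.array([[np.linalg.norm(f(s) - g(t)) for t in traj_t] for s in traj_s])
    n, m = D.shape
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = D[i - 1, j - 1] + min(cost[i - 1, j],
                                               cost[i, j - 1],
                                               cost[i - 1, j - 1])
    # Backtrack to recover the optimal alignment path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]  # list of (source_index, target_index) correspondences

def em_alignment(traj_s, traj_t, train_embeddings, num_iters=5):
    """Alternate between fitting (f, g) on the current pairs and re-aligning."""
    pairs = [(t, t) for t in range(min(len(traj_s), len(traj_t)))]  # time-based init
    for _ in range(num_iters):
        f, g = train_embeddings(traj_s, traj_t, pairs)  # hypothetical training step
        pairs = dtw_pairs(traj_s, traj_t, f, g)         # re-estimate correspondences
    return pairs, f, g
```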
Concretely, we make use of Dynamic Time Warping (DTW), as described in Müller (2007), a well known method for learning correspondences across sequences which may vary in speed. Dynamic time warping requires a metric space for comparing elements in the sequences in order to compute an optimal alignment between them. In this method, we initialize with the weak time-based alignment described below and use it to learn a common feature space. This feature space then serves as a metric space for DTW to re-estimate correspondences across domains. The new correspondences are used as pairs for learning a better feature space, and so on. This forms an Expectation-Maximization style approach which can help estimate better correspondences than naive time-alignment.
The first, extremely simple approach we consider (the time-based alignment referred to above) is to note that in such episodic skills, a reasonable approximate alignment can be obtained by assuming that the two agents perform each task at roughly the same rate; we can therefore simply pair the states that are visited in the same time step in the two proxy domains."}, {"section_index": "9", "section_name": "4. LEARNING COMMON FEATURE SPACES FOR SKILL TRANSFER", "section_text": "In this section, we will discuss how the shared space can be learned by means of the proxy task. We will then describe how this shared space can be used for knowledge transfer for a new task, and finally present results that evaluate transfer on a set of simulated robotic control domains.
We wish to find functions f and g such that, for states s_{Sp,t} and s_{Tp,t} along the optimal policies \pi_{Sp}^* and \pi_{Tp}^*, f and g approximately satisfy p(f(s_{Sp,t})) = p(g(s_{Tp,t})). If we can find the common feature space by learning f and g, we can optimize \pi_T by directly mimicking the distribution over f(s_{Sp,t}), where s_{Sp} \sim \pi_{Sp}^*.
To approximate the requirement p(f(s_{Sp,t})) = p(g(s_{Tp,t})), we assume a pairing P of states in the proxy domains, as described in Section 3.3. The pairing P is a list of pairs of states (s_{Sp}, s_{Tp}) which correspond across domains. As f and g are parametrized as neural networks, we can optimize them using the similarity loss metric introduced by Chopra et al. (2005):

L_{sim}(s_{Sp,t}, s_{Tp,t}; \theta_f, \theta_g) = \| f(s_{Sp,t}; \theta_f) - g(s_{Tp,t}; \theta_g) \|_2

However, as described in Section 3, if this is the only objective for learning f and g, we can easily end up with uninformative degenerate mappings, such as the one where f(s_{Sp,t}) = g(s_{Tp,t}) = 0. Intuitively, a good pair of mappings f and g would be as close as possible to being invertible, so as to preserve as much of the information about the source domain as possible. We therefore train a second pair of decoder networks with the goal of optimizing the quality of the reconstruction of s_{Sp,t} and s_{Tp,t} from the shared feature space, which encourages f and g to preserve the maximum amount of domain-invariant information. We define decoders Dec_S(f(s_{Sp,t})) and Dec_T(g(s_{Tp,t})) that map from the feature space back to their respective states. Note that, in contrast to conventional Siamese network methods, the weights of the two encoder networks are not shared, since the networks have different dimensional inputs. The objectives are

L_{AE_S}(s_{Sp,t}; \theta_f, \theta_{Dec_S}) = \| s_{Sp,t} - Dec_S(f(s_{Sp,t}; \theta_f); \theta_{Dec_S}) \|_2
L_{AE_T}(s_{Tp,t}; \theta_g, \theta_{Dec_T}) = \| s_{Tp,t} - Dec_T(g(s_{Tp,t}; \theta_g); \theta_{Dec_T}) \|_2

where \theta_{Dec_S} and \theta_{Dec_T} are the decoder weights.
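For concreteness, a minimal PyTorch sketch of these three losses might look as follows; the dimensions, hidden sizes, and names (d_s, d_t, feat_dim) are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

d_s, d_t, feat_dim = 6, 8, 20  # e.g. joint angles + velocities for 3- and 4-link arms

def mlp(d_in, d_out):
    # Small fully connected network with 60-unit hidden layers.
    return nn.Sequential(nn.Linear(d_in, 60), nn.ReLU(),
                         nn.Linear(60, 60), nn.ReLU(), nn.Linear(60, d_out))

f, g = mlp(d_s, feat_dim), mlp(d_t, feat_dim)          # encoders into shared space
dec_s, dec_t = mlp(feat_dim, d_s), mlp(feat_dim, d_t)  # decoders back to each state space

def total_loss(s_sp, s_tp):
    """s_sp: (B, d_s) and s_tp: (B, d_t) are batches of paired states from P."""
    z_s, z_t = f(s_sp), g(s_tp)
    l_sim = ((z_s - z_t) ** 2).sum(dim=1).mean()           # similarity loss
    l_ae_s = ((dec_s(z_s) - s_sp) ** 2).sum(dim=1).mean()  # source reconstruction
    l_ae_t = ((dec_t(z_t) - s_tp) ** 2).sum(dim=1).mean()  # target reconstruction
    return l_sim + l_ae_s + l_ae_t
```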
We train the entire network end-to-end using backpropagation, where the full objective is

min_{\theta_f, \theta_g, \theta_{Dec_S}, \theta_{Dec_T}} \sum_{(s_{Sp}, s_{Tp}) \in P} L_{AE_S}(s_{Sp,t}; \theta_f, \theta_{Dec_S}) + L_{AE_T}(s_{Tp,t}; \theta_g, \theta_{Dec_T}) + L_{sim}(s_{Sp,t}, s_{Tp,t}; \theta_f, \theta_g)

A diagram of this learning approach is shown in Figure 1.
Figure 1: The two embedding functions f and g are trained with a contrastive loss between the domains, along with decoders that optimize autoencoder losses.
The functions f and g learned using the approach described above establish an invariant space across the two domains. However, because these functions need not be invertible, directly mapping from a state in the source domain to a state in the target domain is not feasible.
Instead of attempting direct policy transfer, we match the distributions of optimal trajectories across the domains. Given f and g learned as described above and the distribution \pi_S^* of optimal trajectories in the source domain, we can incentivize the distribution of trajectories in the target domain to be similar to the source domain's under the mappings f and g. Ideally, we would like the distributions p(f(s_{S,t})) and p(g(s_{T,t})) to match as closely as possible. However, it may still be necessary for the target agent to learn some aspects of the skill from scratch, since not all intricacies will transfer in the presence of morphological differences. We therefore use a reinforcement learning algorithm to learn \pi_T, but with an additional term added to the reward function that provides guidance via f(s_{S,t}^*). This term has the following form:

r_{transfer}(s_{T,t}) = -\alpha \| f(s_{S,t}^*; \theta_f) - g(s_{T,t}; \theta_g) \|_2

where s_{S,t}^* is the agent-specific state along the optimal policy in the source domain at time step t, s_{T,t} is the agent-specific state along the current policy being learned in the target domain at time step t, and \alpha is a weight on the transfer reward that controls its importance relative to the overall task goal. In essence, this additional reward provides a form of reward shaping, which gives additional learning guidance in the target domain. In sparse reward settings, performance is highly dependent on directed exploration, and this additional reward for matching trajectory distributions in the embedding space provides strong guidance.
In tasks where the pairs mapping is imperfect, the transfer reward may sometimes interfere with learning when the target domain policy is already very good, though it is usually very helpful in the early stages of learning. We therefore might consider gradually reducing the weight \alpha as learning progresses in the target domain. We use this technique for our second experiment, which learns a policy for a tendon-driven arm.
Figure 2: The 3 and 4 link robots performing the button pressing task, which we use to evaluate the performance of our transfer method. Each task is trained on multiple conditions where the objects start in different locations.
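A hedged numpy illustration of this shaping term (function and argument names are assumptions):

```python
import numpy as np

def transfer_reward(s_target_t, t, f, g, source_states, alpha=1.0):
    """Shaping reward r_transfer at time step t.

    source_states[t] holds s*_{S,t}, the agent-specific state along the optimal
    source-domain policy; s_target_t is the target agent's current state; f and g
    are the learned embedding functions.
    """
    return -alpha * np.linalg.norm(f(source_states[t]) - g(s_target_t))
```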
"}, {"section_index": "10", "section_name": "5.1 METHODS USED FOR COMPARISON", "section_text": "In the following experiments, we compare our method with several alternatives. The simplest one, referred to as "no transfer", aims to learn the target task from scratch. This method generally cannot succeed in sparse reward environments without a large number of episodes. Table 1 shows that, without transfer, the tasks are not learned even with 3-4 times more experience.
We also compare to several linear methods, including random projections, canonical correlation analysis (CCA), and unsupervised manifold alignment (UMA). Random projections of data have been found to provide meaningful dimensionality reduction (Hegde et al., 2008). We assign f and g to be random projections into spaces of the same dimension, and transfer as described in Section 4. CCA (Hotelling, 1936) aims to find a basis for the data in which the source data and target data are maximally correlated. We use the matrices that map from state space to the learned basis as f and g. UMA (Wang & Mahadevan, 2009; Ammar et al., 2015b) uses pairwise distances between states to align the manifolds of the two domains. These methods impose a linearity constraint on f and g which proves to limit the expressiveness of the embeddings. We find that using CCA to learn the embedding allows for transfer between robots, albeit with less performance gain than when f and g are neural networks.
We also compare to kernel-CCA (KCCA), which uses a kernel matrix to perform CCA, allowing the method to use an implied non-linear feature mapping of the data. We test several different kernels, including polynomial (quad), radial basis (rbf), and linear. These methods perform especially well on transfer between different actuation methods, but which kernel to use for best performance is not consistent between experiments. For example, although the quadratic kernel performs competitively with our method for the tendon experiment, it does not work at all for our button pushing experiment.
Our experiments aim to evaluate how well common feature space learning can transfer skills between morphologically different agents. The experiments were performed in simulation using the MuJoCo physics simulator (Todorov et al., 2012), in order to explore a variety of different robots and actuation mechanisms. The embedding functions f and g in our experiments are 3 layer neural networks with 60 hidden units each and ReLU non-linearities. They are trained end-to-end with standard backpropagation using the ADAM optimizer (Kingma & Ba, 2015). Videos of our experiments will be available at https://sites.google.com/site/invariantfeaturetransfer/. For details of the reinforcement learning algorithm used, please refer to Appendix A.
The last method we compare with is "direct mapping", which learns to directly predict s_T from s_S instead of mapping both into a common space. This is representative of a number of prior techniques that attempt to put source and target domains into direct correspondence, such as Taylor et al. (2008). In this method, we use the same pairs as we do for our method, estimated from prior experience, but try to map directly from the source domain to the target domain. In order to guide learning using this method, we pass optimal source trajectories through the learned mapping, and then penalize the target robot for deviating from these predicted trajectories. As seen in Figures 5 and 8, this method does not succeed, probably because mapping from one state space to another is more difficult than mapping both state spaces into similar embeddings. The key difference between this method and ours is that we map both domains into a common space, which allows us to put only the common parts of the state spaces in correspondence, instead of trying to map between entire states across domains.
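For clarity, a minimal PyTorch sketch of this direct mapping baseline (the names h, d_s, d_t and the penalty form are illustrative assumptions):

```python
import torch
import torch.nn as nn

d_s, d_t = 6, 8  # illustrative source/target state dimensions
h = nn.Sequential(nn.Linear(d_s, 60), nn.ReLU(), nn.Linear(60, d_t))

def direct_map_loss(s_sp, s_tp):
    """Regression on the same pairs P used by our method."""
    return ((h(s_sp) - s_tp) ** 2).sum(dim=1).mean()

def direct_map_reward(s_target_t, t, source_states, alpha=1.0):
    """Penalize deviation from target states predicted from the source trajectory."""
    with torch.no_grad():
        predicted = h(source_states[t])
    return -alpha * torch.norm(predicted - s_target_t)
```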
We have also included a comparison between using simple time-based alignment across domains and the more elaborate EM-style procedure described in Section 3.3.2.
5.2 TRANSFER BETWEEN ROBOTS WITH DIFFERENT NUMBERS OF LINKS
Figure 3: The 4-link robot pushing the button. Note that the reward function only tells the agent how far the button has been depressed, and provides no information to indicate that the arm should reach for the button.
Figure 4: The 3 and 4 link robots performing each of the three proxy tasks we consider: target reaching, peg insertion, and block moving. Our results indicate that using all three proxy tasks to learn the common feature space improves performance over any single proxy task.
In our first experiment, we evaluate our method on transferring information from a 3-link robot to a 4-link robot. These robots have similar size but different numbers of links and actuators, making the representation needed for transfer non-trivial to learn. In order to evaluate the effectiveness of our method, we consider tasks with sparse or delayed rewards, which are difficult to learn quickly without the use of prior knowledge, large amounts of experience, or a detailed shaping function to guide exploration. For transfer between the 3-link and 4-link robots, we evaluate our method on a button pressing task as shown in Figures 2 and 3. The goal of this task is to reach through a narrow opening and press the white button to the red goal marker indicated in the figure. The caveat is that the reward signal tells the arms nothing about where the button is, but only penalizes the distance between the white button and the red goal. Prior work has generally used well-shaped reward functions for tasks of this type, with terms that reward the arm for approaching the object of interest (Lillicrap et al., 2015; Devin et al., 2016). Without a directed shaping reward guiding the arm towards the button, it is very difficult for the task to be performed at all in the target domain, as seen from the performance of learning from scratch with no transfer ("baseline") in the target domain in Figure 5. This is indicative of how such a task might be learned in the real world, where it is hard to provide anything but very sparse feedback by using a sensor on the button.
For this experiment, we compare the quality of transfer when using different proxy tasks: reaching a target, moving a white block to the red goal, and inserting a peg into a slot near the robot, as shown in Figure 4. These tasks are significantly easier than the sparse reward button pressing task. Collecting successful trajectories from the proxy tasks, we train the functions f and g as described in Section 4. Note that the state in both robots consists of just the joint angles and joint velocities. Learning a suitable common feature space therefore requires the networks to understand how to map from joint angles to end-effectors for both robots.
We consider the 3-link robot pressing the button as the source domain and the 4-link robot pressing the button as the target domain.
We allow the domain with the 3-link robot to have a well-shaped cost function with 2 terms: one for bringing the arm close to the button, and one for the distance of the button from the red goal position. The performance of our method is shown in Figure 5. The agent trained with our method performs more directed exploration and achieves an almost perfect success rate in 7 iterations. The CCA method requires about 4 times more experience to reach 60% success than our method, indicating that using deep function approximators for the functions f and g allows for a more expressive mapping than CCA. Even with kernel CCA, the task cannot be performed as well as with our method. Additionally, the UMA and random projection baselines perform much worse than our method. We additionally find that using the EM-style alignment procedure described in Section 3.3.2 also allows us to reach perfect performance, as shown in Figure 5. Investigating this method further will be the subject of future work.
Learning a direct mapping between states in both domains only provides limited transfer, because this approach is forced to learn a mapping directly from one state space to the other, even though there is often no complete correspondence between two morphologically different robots. For example, there may be some parts of the state which can be put in correspondence, but others which cannot. Our method of learning a common space between robots allows the embedding functions to retain only transferable information.
Figure 5: Percent success on the button pressing task, comparing our method against the no-transfer, random projection, UMA, CCA, kernel-CCA, direct mapping, and jointly-optimized-P variants. The "peg," "push," and "reach" proxy ablations indicate the performance when using embedding functions learned from those proxy tasks. The embedding improves significantly when learned from all three proxy tasks, indicating that our method benefits from additional prior experience.
Table 1: Maximum success rate of the "no transfer" method over 75 iterations of training, shown for the 3 tasks considered in Sections 5.2, 5.3, and 5.4. Because the target environments suffer from sparse rewards, this method is unable to learn the tasks with a tractable amount of data.
In order to illustrate the ability of our method to transfer across vastly different actuation mechanisms and learn representations that are hard to specify by hand, we consider transfer between a torque-driven arm and a tendon-driven arm, both with 3 links. These arms are pictured in Figure 6. The torque-driven arm has motors at each of its joints that directly control its motion, and its state includes joint angles and joint velocities. The tendon-driven arm, illustrated in Figure 6, uses three tendons to actuate the joints. The first tendon spans both the shoulder and the elbow, while the second and third control the elbow and wrist individually. The last tendon has a variable-length lever arm, while the first two have fixed-length lever arms, corresponding to tendons that conform to the arm as it bends. This coupled system uses tendon lengths and tendon velocities as the state representation, without direct access to joint angles or end-effector positions.
The state representations of the two robots are dramatically different, in terms of units, dimensionality, and semantics. Therefore, learning a suitable common feature space represents a considerable challenge. In our evaluation, the torque-driven arm is the source robot, and the tendon-driven arm is the target robot. The task we require both robots to perform is the block pulling task indicated in Figure 7. This involves pulling a block in the direction indicated, which is non-trivial because it requires moving the arm under and around the block, which is restricted to move only in the directions indicated in Figure 6. With random exploration, the target robot is unable to perform directed exploration to get the arm to actually pull the block in the desired direction, as shown in Figure 8.
We use one proxy task for this experiment, which involves both arms reaching to various locations. With embedding functions f and g trained on optimal trajectories from the proxy task, we see that the transfer reward from our method enables the task to actually be performed with a tendon-driven arm. The baseline of learning from scratch, which again corresponds to attempting to learn the task with the target tendon-driven arm from scratch, fails completely. The other methods, using CCA and learning a direct mapping, are able to achieve better performance than learning from scratch but learn slower. Kernel CCA with the quadratic kernel performs competitively with our method but in turn performed very poorly on the button task, so it is not very consistent. Additionally, the random projection and UMA baselines perform quite poorly. The performance of the EM-style alignment procedure is very similar to the standard time-based alignment, as seen in Figure 8, likely because the data is already quite time-aligned across the domains. These results indicate that learning the common feature subspace can enable substantially accelerated learning in the target domain, and in fact can allow the target agent to learn a task that it fails to learn without any transfer rewards, while performing better than alternative methods.
Figure 7: The tendon-driven robot pulling the block. Note that the reward function only tells the agent how far the block is from the red goal and provides no information to indicate that the arm should reach around the block in order to pull it. The block is restricted to move only towards the red goal, but the agent needs to move under and around the block to pull it."}, {"section_index": "11", "section_name": "5.4 TRANSFER THROUGH IMAGE FEATURES", "section_text": "A compelling use-case for learned common embeddings is in learning vision-based policies. In this experimental setup, we evaluate our method on learning embeddings from raw pixels instead of from robot state. Enabling transfer from high dimensional inputs like images would allow significantly more natural transfer across a variety of robots, without restrictive assumptions about full state information.
We evaluate our method on transfer across a 3-link and a 4-link robot as in Section 5.2, but use images instead of state. Because images from the source and target domains are the same size and the same "type", we let g = f. We parametrize f as 3 convolutional layers with 5x5 filters and no pooling. A spatial softmax is applied to the output of the third layer, such that f outputs normalized pixel indices of feature points on the image. These "feature points" form the latent representation that we compare across domains.
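For reference, a minimal numpy sketch of the spatial softmax operation (an illustrative implementation, not the paper's code):

```python
import numpy as np

def spatial_softmax(feature_maps):
    """feature_maps: (C, H, W) conv activations -> (C, 2) expected pixel coords.

    Each channel yields one "feature point": the softmax over all pixel
    locations is treated as a distribution, and its mean gives normalized
    (x, y) coordinates in [-1, 1].
    """
    C, H, W = feature_maps.shape
    xs = np.linspace(-1.0, 1.0, W)
    ys = np.linspace(-1.0, 1.0, H)
    points = np.zeros((C, 2))
    for c in range(C):
        a = feature_maps[c].ravel()
        p = np.exp(a - a.max())
        p /= p.sum()
        p = p.reshape(H, W)
        points[c, 0] = (p.sum(axis=0) * xs).sum()  # expected x coordinate
        points[c, 1] = (p.sum(axis=1) * ys).sum()  # expected y coordinate
    return points
```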
Intuitively, the common "feature point" embeddings should represent parts of the robots which are common across the different robots.
Embeddings between the domains are built using a proxy task of reaching to a point, similar to the one described in the previous experiments. The test task in this case is to push a white block to a red target, as shown in Figure 9a. This task suffers from sparse rewards because the reward only accounts for the distance of the block from the goal; unless the robot knows that it has to touch the block, it receives no reward and has unguided exploration. As shown in Figure 9b, our method is able to transfer meaningful information from the source to the target robot directly from raw images, and successfully performs the task even in the presence of sparse rewards.
Figure 6: The top images show the source and target domain robots: the robot on the left is torque driven at the joints and the one on the right is tendon driven. The tendons are highlighted in the image; the green tendon has a variable-length lever arm, while the yellow tendons have fixed-length lever arms. Note that the first tendon couples two joints. The bottom images show two variations of the test task.
Figure 8: Performance of the tendon-controlled arm on the block pulling task. While the environment's reward is too sparse to succeed in a reasonable time without transfer, using our method to match feature space state distributions enables faster learning. Using a linear embedding or mapping directly from source states to target states allows for some transfer. Optimizing over P instead of assuming time-based alignment does not hurt performance. KCCA with the quadratic kernel performs very well in this experiment, but not in experiment 1.
Figure 9: (a) The 3-link robot demonstrating the task. The yellow triangles mark the locations of the feature points output by f applied to the image pixels. We then use the feature points to transfer the skill to the 4-link robot. (b) Performance of the 4-link robot on the block pushing task for transfer using raw images. We transfer from the 3-link robot by learning a feature space from raw pixels of both domains, enabling faster learning. Random projections and linear kernel-CCA have some success in transfer. The baseline is unable to succeed because the reward signal is too sparse without transfer.
We presented a method for transferring skills between morphologically different agents using invariant feature spaces. The formulation of our transfer problem corresponds to a setting where two agents (e.g. two different robots) have each learned a collection of skills, with some skills known to just one of the agents, and some shared by both. 
A shared skill can be used to learn a space\nthat implicitly brings the agents into correspondence, without assuming that an explicit state space\nisomorphism can be constructed. By then mapping into this space a skill that is known to only one\nof the agents, the other agent can substantially accelerate its learning of this skill by transferring\nthe shared structure. We present an algorithm for learning the shared feature spaces using a shared\nproxy task, and experimentally illustrate that we can use this method to transfer manipulation skills\nbetween different simulated robotic arms. Our experiments include transfer between arms with dif-\nferent numbers of links. as well as transfer from a torque-driven arm to a tendon-driven arm.\nA promising direction for future work is to explicitly handle situations where the two (or more)\nagents must transfer new skills by using a large collection of prior behaviors, with different degrees\nof similarity between the agents. In this case, constructing a shared feature space involves not only\nmapping the skills into a single space, but deciding which skills should or should not be combined.\nFor example, a wheeled robot might share manipulation strategies with a legged robot, but should\nnot attempt to share locomotion behaviors.\nIn a large-scale lifelong learning domain with many agent and many skills, we could also consider\nusing our approach to gradually construct more and more detailed common feature spaces by trans-\nferring a skill from one agent to another, using that new skill to build a better common feature\nspace, and then using this improved feature space to transfer more skills. Automatically choosing\nwhich skills to transfer when in order to minimize the training time of an entire skill repertoire is an\ninteresting and exciting direction for future work."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Haitham Bou Ammar and Matthew E. Taylor. Reinforcement learning transfer via common sub-\nspaces. In Adaptive and Learning Agents: International Workshop, 2012.\nHaitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew Taylor. Unsupervised cross-domai\ntransfer in policy gradient reinforcement learning via manifold alignment. In AAA/J Conferenc\non Artificial Intelligence, 2015a.\nHaitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E Taylor. Unsupervised cross-domain\ntransfer in policy gradient reinforcement learning via manifold alignment. In Proc. of AAAI,\n2015b.\nAlexander Braylan, Mark Hollenbeck, Elliot Meyerson, and Risto Miikkulainen. Reuse of neural\nmodules for general video game playing. CoRR, abs/1512.01537, 2015.\nRich Caruana. Multitask learning. Machine Learning, 1997.\nSumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with\napplication to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005.\nIEEE Computer Society Conference on, volume 1, pp. 539-546. IEEE, 2005.\nHarold Hotelling. Relations between two sets of variates. Biometrika, 28, 1936\nSergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search unde:\nunknown dynamics. In Advances in Neural Information Processing Systems, 2014.\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo-\nmotor policies. Journal of Machine Learning Research, 17:1\u201440, 2016.\nWeiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biologica\nmovement systems. 
In ICINCO (1), 2004.
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
Andrew Meltzoff. Born to learn: What infants learn from watching us. Skillman, NJ: Pediatric Institute Publication, 1999.
Meinard Müller. Dynamic time warping. Information retrieval for music and motion, pp. 69-84, 2007.
Kaizad V Raimalwala, Bruce A Francis, and Angela P Schoellig. A preliminary study of transfer learning between unicycle robots. In 2016 AAAI Spring Symposium Series, 2016.
Giacomo Rizzolatti and Laila Craighero. The mirror neuron system. Annual Review of Neuroscience, 27:169-192, 2004.
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016a.
Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633-1685, 2009.
Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2008.
M. A. Umilta, L. Escola, I. Intskirveli, F. Grammont, M. Rochat, F. Caruana, A. Jezzini, V. Gallese, and G. Rizzolatti. When pliers become fingers in the monkey motor system. Proceedings of the National Academy of Sciences, 105(6):2209-2213, 2008.
George Konidaris and Andrew Barto. Autonomous shaping: knowledge transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.
Matthew Taylor, Peter Stone, and Yaxin Liu. Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research, 8(1):2125-2167, 2007.
Chang Wang and Sridhar Mahadevan. Manifold alignment without correspondence. In IJCAI, volume 2, pp. 3, 2009."}, {"section_index": "13", "section_name": "7.1 REINFORCEMENT LEARNING WITH LOCAL MODELS", "section_text": "Although we can use any suitable reinforcement learning algorithm for learning policies, in this work we use a simple trajectory-centric reinforcement learning method that trains time-varying linear-Gaussian policies (Levine & Abbeel, 2014). While this method produces simple policies, it is very efficient, making it well suited for robotic learning. To obtain robot trajectories for training tasks and source robots, we optimize time-varying linear-Gaussian policies through a trajectory-centric reinforcement learning algorithm that alternates between fitting local time-varying linear dynamics models, and updating the time-varying linear-Gaussian policies using the iterative linear-quadratic Gaussian regulator algorithm (iLQG) (Li & Todorov, 2004). This approach is simple and efficient, and is typically able to learn complex high-dimensional skills using just tens of trials, making it well suited for rapid transfer. The resulting time-varying linear-Gaussian policies are parametrized as p(u_t | x_t) = N(K_t x_t + k_t, C_t), where K_t, k_t, and C_t are learned parameters. Further details of this method are presented in prior work (Levine & Abbeel, 2014).
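As a small illustration of this policy class, sampling an action from a time-varying linear-Gaussian policy might look like the following numpy sketch (names are illustrative):

```python
import numpy as np

def sample_action(x_t, K_t, k_t, C_t, rng=None):
    """Draw u_t ~ N(K_t x_t + k_t, C_t) at one time step of the trajectory."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.multivariate_normal(K_t @ x_t + k_t, C_t)
```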
We use the same reinforcement learning algorithm to provide solutions in the source domain D_S, though again any suitable reinforcement learning method (or even human demonstrations) could be used instead. To evaluate the ability of our method to provide detailed guidance through the transfer reward r_transfer, we use relatively sparse reward functions in the target domain D_T, as discussed below. To generate the original skills in the source domain D_S and in the proxy domains D_Sp and D_Tp, we manually designed appropriately shaped costs to enable learning from scratch to succeed, though we note again that our method is agnostic to how the source domain and proxy domain skills are acquired."}]
Bk3F5Y9lx
[{"section_index": "0", "section_name": "EPITOMIC VARIATIONAL AUTOENCODER", "section_text": "Serena Yeung *\nStanford University\nStanford University\n{feifeili}@cs.stanford.edu\nIn this paper, we propose epitomic variational autoencoder (eVAE), a probabilis-\ntic generative model of high dimensional data. eVAE is composed of a number!\nof sparse variational autoencoders called \u2018epitome\u2019 such that each epitome par-\ntially shares its encoder-decoder architecture with other epitomes in the composi-\ntion. We show that the proposed model greatly overcomes the common problem\nin variational autoencoders (VAE) of model over-pruning. We substantiate thai\neVAE is efficient in using its model capacity and generalizes better than VAE, by\npresenting qualitative and quantitative results on MNIST and TFD datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of\none such generative model. VAE pairs a top down generative model with a bottom up recognition\nnetwork for amortized probabilistic inference. Both networks are jointly trained to maximize a\nvariational lower bound on the data likelihood. A number of recent works use VAE as a modeling\nframework, including iterative conditional generation of images (Gregor et al., 2015) and conditional\nfuture frame prediction (Xue et al., 2016).\nA commonly known problem with the VAE lower bound is that it is known to self-prune or un-\nder utilize the model\u2019s capacity (Mackay, 2001). This can lead to poor generalization. A common\napproach to alleviate this problem is to resort to optimization schedules and regularization tech-\nniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade-off two competing terms, latent\ncost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of\nover-pruning and how commonly used regularization techniques may not be sufficient. Detailed\ndiscussion is provided in \u00a7 2.1.\nIn this paper, we take a model-based approach to directly address this problem. We present an exten-\nsion of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE\nfor short) that automatically learns to utilize its model capacity more effectively, leading to bette:\ngeneralization. Consider the task of learning a D-dimensional representation for the examples ir\na given dataset. The motivation for our model stems from the hypothesis that a single example ir\nthe dataset can be sufficiently embedded in a smaller k-dimensional (K < D) subspace of D.\nHowever, different data points may need different subspaces, hence the need for D. Sparse coding\nmethods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional cat.\negorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable\nactivates only a contiguous subset of latent stochastic variables to generate an observation. This\n\u201cWork done during an internship at Facebook AI Research.\nAnitha Kannan & Yann Dauphin\nFacebook AI Research\nfakannan, ynd}@fb.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Unsupervised learning holds the promise of learning the inherent structure in data so as to enable\nmany future tasks including generation, prediction and visualization. 
Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined, such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.
The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.
The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian,

p(z) = N(z; 0, I)

and then generating the observation x from a Gaussian whose parameters are functions of z:

p_\theta(x|z) = N(x; f_1(z), exp(f_2(z)))

Given a dataset X of T i.i.d. samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|\theta). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the z_i when conditioned on x.
Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters \phi that outputs the posterior distribution of the form q_\phi(z|x) = \prod_i q(z_i|x). This results in the lower bound

log p_\theta(X) = \sum_{t=1}^T log \int_z p_\theta(x^{(t)}, z) \geq \sum_{t=1}^T E_{q_\phi(z|x^{(t)})}[log p_\theta(x^{(t)}|z)] - KL(q_\phi(z|x^{(t)}) || p(z))

VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

C_vae = -\sum_{t=1}^T E_{q_\phi(z|x^{(t)})}[log p_\theta(x^{(t)}|z)] + \sum_{t=1}^T \sum_{i=1}^D KL(q_\phi(z_i|x^{(t)}) || p(z_i))

Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it gives the model undue freedom in how it minimizes this term. In particular, the model needs only to ensure that the overall KL term is minimized on average, and not per component. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off units so that the posterior for those units becomes the same as the prior¹. This effect is quite pronounced in the early iterations of training: the model for log p(x|z) is quite impoverished, and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.
¹Since log variance is modeled using the neural network, turning it off will lead to a variance of 1.
Figure 1: Sorted activity level of latent units and corresponding generations on MNIST, for a 50-d VAE with a hidden layer of 500 units. Shown for varying values of the KL weight λ. When λ = 1, only 30 units are active. As λ is decreased, more units are active; however generation does not improve, since the model uses the capacity to model increasingly well only regions of the posterior manifold near training samples (see reconstructions in Fig. 8).
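To make the per-unit KL bookkeeping concrete, here is a small numpy illustration of the closed-form KL between a diagonal-Gaussian posterior and the standard-normal prior (an illustrative sketch, not code from the paper):

```python
import numpy as np

def kl_unit_vs_prior(mu, logvar):
    """Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) )."""
    return 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)

# A unit whose posterior matches the prior (mu = 0, logvar = 0) contributes
# exactly zero KL -- this is the "switched off" state the model can exploit.
mu = np.array([0.0, 1.5])
logvar = np.array([0.0, -2.0])
print(kl_unit_vs_prior(mu, logvar))  # -> [0.0, ~1.69]
```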
Figure 2: Only active units contribute to generation, whereas units that have "died" have no effect. Shown for a 50-d VAE with λ = 1; the three panels show generations using all units, active units only, and dead units only.
A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit u to be used, or "active", if A_u = Cov_x(E_{u~q(u|x)}[u]) > 0.02.
A commonly used approach to overcome this problem is to trade off the two terms using a parameter λ, so that the cost is

C = -\sum_{t=1}^T E_{q_\phi(z|x^{(t)})}[log p(x^{(t)}|z)] + λ \sum_{t=1}^T \sum_{i=1}^D KL(q_\phi(z_i|x^{(t)}) || p(z_i))

Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity on ensuring that reconstruction of the training set is optimized, at the cost of generalization (reconstruction visualizations are shown in § 8.1). This has led to more sophisticated schemes such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing a minimum KL contribution from subsets of the latent units (Kingma et al., 2016).
In this paper, we present a model-based approach called "epitomic variational autoencoder" to address the problem of over-pruning.
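The activity criterion above is straightforward to compute; a minimal numpy sketch (illustrative; posterior_means would come from the trained recognition network):

```python
import numpy as np

def active_units(posterior_means, threshold=0.02):
    """posterior_means: (num_examples, D) array of E_{q(z|x)}[z], one row per x.

    A_u is the variance across the dataset of each unit's posterior mean; units
    whose posterior ignores x have A_u close to 0 and count as dead.
    """
    A = posterior_means.var(axis=0)
    return A, A > threshold
```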
"}, {"section_index": "3", "section_name": "3. MODEL", "section_text": "We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thickness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active².
The generative process can be described as follows: a D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N(z; 0, I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N-dimensional observation x is then drawn from a Gaussian distribution:

p_\theta(x|y, z) = N(x; f_1(m_y ⊙ z), exp(f_2(m_y ⊙ z)))

Here, m_y enforces the epitome constraint: it is a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome, and ⊙ is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes).
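A tiny numpy sketch of these strided binary masks m_y (illustrative only):

```python
import numpy as np

def epitome_masks(D=8, K=2, stride=2):
    """One binary mask per epitome: K contiguous active dimensions out of D."""
    masks = []
    for start in range(0, D - K + 1, stride):
        m = np.zeros(D)
        m[start:start + K] = 1.0
        masks.append(m)
    return np.stack(masks)

print(epitome_masks())  # 4 epitomes for D=8, K=2, stride 2, as in Fig. 3
```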
"}, {"section_index": "4", "section_name": "3.1 OVERCOMING OVER-PRUNING", "section_text": "Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form

q(z, y|x) = q(y|x)\, q(z|y, x) = q(y|x)\, \mathcal{N}\left(z;\, m_y \odot \mu,\, \exp(m_y \odot \phi)\right)

where \mu = h_\mu(x) and \phi = h_\phi(x) are neural networks that map x to D-dimensional space. We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterize the parameters of q(z|y, x).
As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is

C_{evae} = -\sum_{t=1}^{T} \mathbb{E}_{q(z|y^{(t)}, x^{(t)})}\left[\log p(x^{(t)}|y^{(t)}, z)\right] + \sum_{t=1}^{T} \mathrm{KL}\left[q(y|x^{(t)}) \,\|\, p(y)\right] + \sum_{t=1}^{T} \sum_{y} q(y|x^{(t)})\, \mathrm{KL}\left[q(z|y, x^{(t)}) \,\|\, p(z)\right]

The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, substituting in eq. 9:

\sum_{t=1}^{T} \sum_{y} q(y|x^{(t)})\, \mathrm{KL}\left[q(z|y, x^{(t)}) \,\|\, p(z)\right] = \sum_{t=1}^{T} \sum_{y} q(y|x^{(t)})\, \mathrm{KL}\left[\mathcal{N}\left(z;\, m_y \odot \mu,\, \exp(m_y \odot \phi)\right) \,\|\, \mathcal{N}(z;\, 0, I)\right]
 = \sum_{t=1}^{T} \sum_{y} q(y|x^{(t)}) \sum_{d=1}^{D} \mathbb{1}[m_{y,d} = 1]\, \mathrm{KL}\left[\mathcal{N}\left(\mu_d,\, \exp(\phi_d)\right) \,\|\, \mathcal{N}(0, 1)\right]

where \mathbb{1}[x] is an indicator variable that evaluates to 1 if and only if its operand x is true.
For a training example x^{(t)} and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL because their posterior parameters are masked to have unit Gaussian, the same as the prior. By design, this ensures that only the K dimensions that explain x^{(t)} contribute to C_{evae}.
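The claim that only K dimensions contribute KL can be checked directly. The following sketch (numpy; shapes and values are illustrative) evaluates the per-dimension KL of the masked posterior against a unit Gaussian prior.

    import numpy as np

    def masked_kl(mu, log_var, mask):
        # Per-dimension KL( N(m*mu, exp(m*log_var)) || N(0, 1) ). Masked
        # dimensions have zero mean and unit variance, matching the prior,
        # so their KL contribution is exactly zero.
        m_mu = mask * mu
        m_lv = mask * log_var
        return 0.5 * (m_mu ** 2 + np.exp(m_lv) - m_lv - 1.0)

    mu, log_var = np.random.randn(8), np.random.randn(8)
    mask = np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=np.float32)  # 2nd epitome, K = 2
    print(masked_kl(mu, log_var, mask))  # zeros outside dimensions 2-3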
This is quite in contrast to how VAE optimizes C_{vae} (§ 2.1). For C_{vae} to have a small contribution from the KL term of a particular z_d, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, leading to many dead units. Epitomic VAE chooses the epitome based on x^{(t)} and ensures that the dimensions that are not useful in explaining x^{(t)} are ignored in C_{evae}. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to z_d's KL term in C_{evae}. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.
In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 alongside the explanation in § 4.1, where we compare generation results for all three models.
Figure 4: Adding dropout to a VAE (here, dropout rate 0.5 is shown) can prevent the model from pruning units, shown for MNIST. However, in contrast to eVAE, it uses the additional units to encode redundancy, not additional information, and therefore does not address the problem. Generation results are shown in Fig. 5.
"}, {"section_index": "5", "section_name": "3.2 TRAINING", "section_text": "The generative model and the recognition network are trained simultaneously, by minimizing C_{evae} in eq. 10.
For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution, as the samples are deterministic functions of the inputs and auxiliary variables.
For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y* so that q(y|x) = \delta(y = y*), where \delta evaluates to 1 only if y = y*, and the best y* = \arg\min_y C_{evae}. We also explored modeling q(y|x) = \mathrm{Cat}(h(x)) as a discrete distribution with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.
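Since q(y|x) is collapsed to a point estimate, choosing the epitome for an example is a small search over epitomes. A sketch of that choice, under the assumption that a per-example reconstruction cost is available as a function (the interface below is hypothetical), could look as follows.

    import numpy as np

    def assign_epitome(x, mu, log_var, masks, reconstruction_nll):
        # Point estimate y* = argmin_y C_evae(x, y): score the example under
        # every epitome and keep the cheapest. `reconstruction_nll(x, z)` is
        # a hypothetical stand-in for -log p(x | y, z) under the decoder; the
        # posterior mean is used here in place of a sample for brevity.
        best_y, best_cost = 0, np.inf
        for y, m in enumerate(masks):
            mu_y, lv_y = m * mu, m * log_var
            kl = 0.5 * np.sum(mu_y ** 2 + np.exp(lv_y) - lv_y - 1.0)
            cost = reconstruction_nll(x, mu_y) + kl
            if cost < best_cost:
                best_y, best_cost = y, cost
        return best_y

    masks = np.eye(4).repeat(2, axis=1)      # four disjoint 2-d epitomes in 8-d
    x = np.random.randn(10)
    mu, log_var = np.random.randn(8), np.random.randn(8)
    dummy_nll = lambda x, z: float(z @ z)    # placeholder reconstruction cost
    print(assign_epitome(x, mu, log_var, masks, dummy_nll))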
The recognition network first computes \mu and \phi. It is then combined with the optimal y* for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.

Algorithm 1 Learning Epitomic VAE
  \theta, \phi \leftarrow Initialize parameters
  for until convergence of parameters (\theta, \phi) do
    Assign each x to its best y_x = \arg\min_y C_{evae}
    Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples \forall y
    for k \in numbatches do
      Update model parameters using the k-th minibatch, consisting of (x, y_x) pairs
    end for
  end for
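Algorithm 1 alternates a hard assignment step with balanced minibatch updates. The following Python sketch shows one way to realize it; the model interface (cost, update) is an assumption made for illustration, not part of any particular library.

    import numpy as np

    def train_evae(model, data, num_epitomes, num_epochs, batch_size, rng=np.random):
        # A sketch of Algorithm 1. `model` is a hypothetical object exposing
        # cost(x, y), the per-example C_evae term, and update(batch), one
        # gradient step on a minibatch of (x, y) pairs.
        n = len(data)
        num_batches = int(np.ceil(n / batch_size))
        for epoch in range(num_epochs):  # "until convergence" in Algorithm 1
            # Assign each x to its best epitome y_x = argmin_y C_evae.
            y = np.array([min(range(num_epitomes), key=lambda k: model.cost(x, k))
                          for x in data])
            # Randomize, then build minibatches with proportionate epitome counts:
            # shuffle, stable-sort by assignment, and deal examples out round-robin.
            perm = rng.permutation(n)
            order = perm[np.argsort(y[perm], kind="stable")]
            for b in range(num_batches):
                idx = order[b::num_batches]
                model.update([(data[i], int(y[i])) for i in idx])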
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We present experimental results on two datasets, MNIST (LeCun et al., 1998) and the Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE's ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term \lambda = 1, to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.
We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully-connected networks, and we show results for different depths and numbers of units per layer. ReLU non-linearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
[Figure 5 shows grids of generated digits for 2-d, 5-d, 10-d, and 20-d VAE, Dropout VAE, and eVAE models.]
Figure 5: Generations from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. VAE generation quality (1st row) degrades as latent dimension increases, and it is unable to effectively use added capacity to model greater variability. Adding dropout to the VAE (2nd row) fails to solve the problem since additional units are used to encode redundancy, not additional information. eVAE (3rd row) overcomes the problem by modeling multiple shared subspaces; here 2-d (overlapping) epitomes are maintained as the latent dimension is increased. Learned epitome manifolds from the 20-d model are shown in Fig. 3. Boxed digits highlight the difference in variability that the VAE vs. eVAE model is able to achieve.
"}, {"section_index": "7", "section_name": "4.1 OVERCOMING OVER-PRUNING", "section_text": "We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensions D of latent variable z. With D = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues.
As explained in Sec. 2.1, this is due to VAE's propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds is shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased.
"}, {"section_index": "8", "section_name": "4.2 CHOICE OF EPITOME SIZE", "section_text": "We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator (Rifai et al., 2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are non-overlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the numbers of active units for VAE are 8, 22 and 24 respectively, for D values of 8, 24 and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid over-pruning and effectively capture the data distribution.
eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set.
[Figure 6 is a bar plot of Parzen log-density against epitome size, grouped by D = 8, 24, 48, comparing VAE, mVAE, and eVAE.]
Figure 6: Epitome size vs. Parzen log-density (nats) on MNIST, grouped by different dimensions D of latent variable z. VAE performance for equivalent D is shown for comparison, as well as mVAE (ablative version of eVAE without parameter sharing). For each D, the optimal epitome size is significantly smaller than D.
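For reference, the Parzen-window score used throughout this section is straightforward to compute. Below is a minimal sketch (numpy; toy data in place of real decoder samples) of a Gaussian Parzen window estimator in the spirit of Rifai et al. (2012); higher is better, reported in nats, and sigma is normally chosen on a validation set.

    import numpy as np

    def parzen_log_density(model_samples, test_points, sigma):
        # Fit an isotropic Gaussian of width sigma on each model sample and
        # average: log p(x) = log mean_k N(x | s_k, sigma^2 I).
        d = model_samples.shape[1]
        diff = test_points[:, None, :] - model_samples[None, :, :]
        exponent = -(diff ** 2).sum(axis=-1) / (2.0 * sigma ** 2)
        log_norm = 0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
        # Stable log-sum-exp over the samples.
        m = exponent.max(axis=1, keepdims=True)
        log_p = m[:, 0] + np.log(np.exp(exponent - m).mean(axis=1)) - log_norm
        return log_p  # one value per test point, in nats

    samples = np.random.randn(500, 2)   # stand-ins for decoder samples
    test = np.random.randn(64, 2)
    print(parzen_log_density(samples, test, sigma=0.2).mean())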
"}, {"section_index": "9", "section_name": "4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER", "section_text": "Here, we would like to understand the role of encoder and decoder architectures on over-pruning and on generative performance. We control model complexity through the number of layers L of deterministic hidden units, and the number of hidden units H in each deterministic layer.
Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.
We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and performance. On the other hand, as the depth of the encoder and decoder L is increased, the number of active units in VAE decreases though performance is still able to improve. This illustrates that increases in the complexity of the interactions through use of multiple layers counteract the perils of over-pruning. However, this comes with the cost of a substantial increase in the number of model parameters to be learned.
In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.
Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings, when each epitome can also benefit from general features learned across the training set.
Table 1: Parzen log-densities in nats of VAE, mVAE and eVAE for increasing model parameters, trained on MNIST and TFD with different dimensions D of latent variable z. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping. Across each row shows performance as the number of encoder and decoder layers L increases for a fixed number of hidden units H in each layer, and as H increases. Numbers of active units are indicated in parentheses.
           |            H = 500            |            H = 1000
           |   L=1      L=2      L=3       |   L=1      L=2      L=3
MNIST
D=8   VAE  |  283(8)   292(8)   325(8)     |  283(8)   290(8)   322(6)
      mVAE |  300(8)   328(8)   337(8)     |  309(8)   333(8)   335(8)
      eVAE |  300(8)   330(8)   337(8)     |  312(8)   331(8)   334(8)
D=24  VAE  |  213(22)  273(11)  305(8)     |  219(24)  270(12)  311(7)
      mVAE |  309(24)  330(24)  336(24)    |  313(24)  333(24)  338(24)
      eVAE |  311(24)  331(24)  336(24)    |  317(24)  332(24)  336(24)
D=48  VAE  |  213(24)  267(13)  308(8)     |  224(24)  273(12)  309(8)
      mVAE |  314(48)  334(48)  336(48)    |  315(48)  333(48)  337(48)
      eVAE |  319(48)  334(48)  337(48)    |  321(48)  334(48)  332(48)
TFD
D=15  VAE  |    -      2173(15) 2180(15)   |    -      2149(15) 2116(15)
      mVAE |    -      2276(15) 2314(15)   |    -      2298(15) 2343(15)
      eVAE |    -      2298(15) 2353(15)   |    -      2278(15) 2367(15)
D=25  VAE  |    -      2067(25) 2085(25)   |    -      2037(25) 2101(25)
      mVAE |    -      2287(25) 2306(25)   |    -      2332(25) 2351(25)
      eVAE |    -      2309(25) 2371(25)   |    -      2297(25) 2371(25)
D=50  VAE  |    -      1920(50) 2062(29)   |    -      1886(50) 2066(30)
      mVAE |    -      2253(50) 2327(50)   |    -      2280(50) 2358(50)
      eVAE |    -      2314(50) 2359(50)   |    -      2302(50) 2365(50)
"}, {"section_index": "10", "section_name": "4.4 COMPARISON WITH OTHER MODELS", "section_text": "In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE⁻, mVAE⁻, and eVAE⁻ refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L, H, D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).
We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.

Method           MNIST (10K)   TFD (10K)
DBN              138 ± 2       1909 ± 66
Deep CAE         121 ± 1       2110 ± 50
Deep GSN         214 ± 1       1890 ± 29
GAN              225 ± 2       2057 ± 26
GMMN + AE        282 ± 2       2204 ± 20
Adversarial AE   340 ± 2       2252 ± 16
VAE⁻             290 ± 2       2149 ± 23
mVAE⁻            333 ± 2       2298 ± 23
eVAE⁻            331 ± 2       2278 ± 26
VAE              325 ± 2       2180 ± 20
mVAE             338 ± 2       2358 ± 20
eVAE             337 ± 2       2371 ± 20

Table 2: Parzen log-densities in nats on MNIST and TFD. VAE⁻, mVAE⁻, and eVAE⁻ refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1.
[Figure 7 shows qualitative eVAE samples: handwritten digits for MNIST and faces for TFD.]
Figure 7: eVAE samples for MNIST (left) and TFD (right).
"}, {"section_index": "11", "section_name": "5 RELATED WORK", "section_text": "A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) propose a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute-specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that take away the clean mathematical formulation of VAE.
We have discussed these in § 2.1.
A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only a single sample is drawn from the posterior.
Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augment the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over-pruning still persists: for instance, Kingma et al. (2016) enforce a minimum information constraint to ensure that all units are used.
Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints, c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data.
"}, {"section_index": "12", "section_name": "6 CONCLUSION", "section_text": "This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference.
"}, {"section_index": "13", "section_name": "7 ACKNOWLEDGMENTS", "section_text": "We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc'Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term 'epitome' from an earlier work of Jojic et al. (2003).
"}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2015.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998.
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: scene understanding with generative models. CoRR, abs/1603.08575, 2016.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
D. J. C. MacKay. Local minima, symmetry-breaking, and model pruning in variational free energy minimization. 2001.
Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2016.
Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.
Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The Toronto face database. Department of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep, 3, 2010.
Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv preprint arXiv:1607.02586, 2016.
Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. CoRR, abs/1512.00570, 2015.
We visualize VAE reconstructions as the KL term weight \lambda is tuned down to keep latent units active. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.
[Figure 8 shows reconstruction grids for \lambda = 1.0, 0.5, and 0.2.]
Figure 8: Reconstructions for a 50-d VAE with KL weight \lambda = 1, 0.5, and 0.2. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions.
"}, {"section_index": "15", "section_name": "8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION", "section_text": "In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5).
Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
[Figure 9 shows reconstruction grids for 2-d, 5-d, 10-d, and 20-d VAE, Dropout VAE, and eVAE models.]
Figure 9: Reconstructions from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. The eVAE models multiple shared subspaces by maintaining 2-d (overlapping) epitomes as the latent dimension is increased. eVAE is the only model that achieves both good reconstruction and generation.
"}, {"section_index": "16", "section_name": "8.3 EVALUATION METRIC FOR GENERATION", "section_text": "There have been multiple approaches for evaluation of variational autoencoders, in particular the log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than the log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.
Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimension D of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also match Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed out digits.
(Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.
Intuitively, the reason why VAE improves the likelihood bound while generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to a large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight \lambda in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.

              Rec. term   KLD term   Likelihood bound   Log-density
VAE   D=8     -89.4       -16.6      -106.0             278
      D=24    -61.1       -29.3      -90.4              152
      D=48    -59.1       -30.3      -89.4              151
eVAE  D=8     -110.1      -9.6       -119.7             298
      D=24    -84.2       -15.7      -99.9              274
      D=48    -82.8       -14.2      -97.0              284

Table 3: Likelihood bound and log-density for VAE and eVAE as dimension D of latent variable z is increased. The encoder and decoder for all models consist of a single deterministic layer with 500 units. eVAE models have epitomes of size K = 4 for D = 8, and K = 8 for D = 24 and D = 48. The breakdown of the likelihood bound into reconstruction term and KLD term is also shown.
[Figure 10 plots the NLL breakdown (KL term, reconstruction term, and log-likelihood bound) for VAE and eVAE at D = 8, 24, 48.]
Figure 10: Likelihood bound for VAE and eVAE as D increases (shown as NLL). VAE improvement of the bound is due to significant reduction of reconstruction error, but at high cost of KL, which is closely related to generation. eVAE improves reconstruction more moderately, but also maintains lower KL, and has stronger generation overall.
[Figure 11 shows grids of generation samples for VAE (D = 24, 48) and eVAE (D = 8, 24, 48).]
Figure 11: Generation samples for VAE and eVAE as dimension D of latent variable z is increased. VAE sample quality decreases, which is consistent with log-density but not likelihood bound."}]
ry2YOrcge
[{"section_index": "0", "section_name": "LEARNING A NATURAL LANGUAGE INTERFACE\nWITH NEURAL PROGRAMMER", "section_text": "Arvind Neelakantan*
University of Massachusetts Amherst
arvind@cs.umass.edu
Andrew McCallum
University of Massachusetts Amherst
mccallum@cs.umass.edu"}, {"section_index": "1", "section_name": "BACKGROUND AND INTRODUCTION", "section_text": "Databases are a pervasive way to store and access knowledge. However, it is not straightforward for users to interact with databases, since it often requires programming skills and knowledge about database schemas. Overcoming this difficulty by allowing users to communicate with databases via natural language is an active research area. The common approach to this task is by semantic parsing, which is the process of mapping natural language to symbolic representations of meaning. In this context, semantic parsing yields logical forms or programs that provide the desired response when executed on the databases. Semantic parsing is a challenging problem that involves deep language understanding and reasoning with discrete operations such as counting and row selection (Liang, 2016).
*Work done at Google Brain.
†Work done both at Google Brain and at OpenAI.
Quoc V. Le
Google Brain
qvl@google.com
Martin Abadi
Google Brain
abadi@google.com
Dario Amodei†
OpenAI
damodei@openai.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide the desired response when executed on the database. To our knowledge, this paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. We enhance the objective function of Neural Programmer, a neural network with built-in discrete operations, and apply it on WikiTableQuestions, a natural language question-answering dataset. The model is trained end-to-end with weak supervision of question-answer pairs, and does not require domain-specific grammars, rules, or annotations that are key elements in previous approaches to program induction. The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision. An ensemble of 15 models, with a trivial combination technique, achieves 37.7% accuracy, which is competitive to the current state-of-the-art accuracy of 37.1% obtained by a traditional natural language semantic parser.
The first learning methods for semantic parsing require expensive annotation of question-program pairs (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005). This annotation process is no longer necessary in the current state-of-the-art semantic parsers that are trained using only question-answer pairs (Liang et al., 2011; Kwiatkowski et al., 2013; Krishnamurthy & Kollar, 2013; Pasupat & Liang, 2015). However, the performance of these methods still heavily depends on domain-specific grammar or pruning strategies to ease program search. For example, in a recent work on building semantic parsers for various domains, the authors hand-engineer a separate grammar for each domain (Wang et al., 2015).
Recently, neural network models have been developed for program induction, despite the notorious difficulty of handling discrete operations in neural networks (Joulin & Mikolov, 2015; Kaiser & Sutskever, 2016).
Most of these approaches rely on complete programs as supervision (Jia & Liang, 2016; Reed & de Freitas, 2016), while others (Zaremba et al., 2015; 2016) have been tried only on synthetic tasks. The work that is most similar to ours is that of Andreas et al. (2016) on the dynamic neural module network. However, in their method, the neural network is employed only to search over a small set of candidate layouts provided by the syntactic parse of the question, and is trained using the REINFORCE algorithm (Williams, 1992). Hence, their method cannot recover from parser errors, and it is not trivial to adapt the parser to the task at hand. Additionally, all their modules or operations are parametrized by a neural network, so it is difficult to apply their method on tasks that require discrete arithmetic operations. Finally, their experiments concern a simpler dataset that requires fewer operations, and therefore a smaller search space, than WikiTableQuestions which we consider in our work. We discuss other related work in Section 4.
Neural Programmer (Neelakantan et al., 2016) is a neural network augmented with a set of discrete operations. It produces both a program, made up of those operations, and the result of running the program against a given table. The operations make use of three variables: row selector, scalar answer, and lookup answer, which are updated at every timestep. lookup answer and scalar answer store answers while row selector is used to propagate information across time steps. As input, a model receives a question along with a table (Figure 1). The model runs for a fixed number of time steps, selecting an operation and a column from the table as the argument to the operation at each time step. During training, soft selection is performed so that the model can be trained end-to-end using backpropagation. This approach allows Neural Programmer to explore the search space with better sample complexity than hard selection with the REINFORCE algorithm would provide. All the parameters of the model are learned from a weak supervision signal that consists of only the final answer; the underlying program, which consists of a sequence of operations and of selected columns, is latent.
[Figure 1 depicts an example: the question "what was the total number of goals scored in 2005" together with a table of seasons, teams, countries, competitions, matches and goals; at timestep t the model selects a column and an operation, and the row selector from timestep t-1 carries data from the table forward.]
Figure 1: Neural Programmer is a neural network augmented with a set of discrete operations. The model runs for a fixed number of time steps, selecting an operation and a column from the table at every time step. The induced program transfers information across timesteps using the row selector variable while the output of the model is stored in the scalar answer and lookup answer variables.
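To make the soft/hard selection contrast concrete, here is a small Python sketch of one selection step; the dot-product scoring and the shapes are illustrative assumptions of this sketch, not the exact parameterization of the model.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def soft_select(context, op_embeddings, col_embeddings):
        # One illustrative timestep of soft selection: score every operation
        # and every column against a context vector, normalize with softmax,
        # and form the weighted mixtures that feed the next timestep.
        alpha_op = softmax(op_embeddings @ context)
        alpha_col = softmax(col_embeddings @ context)
        op_mix = alpha_op @ op_embeddings
        col_mix = alpha_col @ col_embeddings
        return alpha_op, alpha_col, op_mix, col_mix

    context = np.random.randn(256)
    ops = np.random.randn(15, 256)   # 15 operations, as in Section 2.1
    cols = np.random.randn(6, 256)   # one vector per column of the table
    a_op, a_col, _, _ = soft_select(context, ops, cols)
    print(a_op.argmax(), a_col.argmax())  # hard selection, as done at test time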
In this work, we develop an approach to semantic parsing based on Neural Programmer. We show how to learn a natural language interface for answering questions using database tables, thus integrating differentiable operations that are typical of neural networks with the declarative knowledge contained in the tables and with discrete operations on tables and entries. For this purpose, we make several improvements and adjustments to Neural Programmer, in particular adapting its objective function to make it more broadly applicable.
In earlier work, Neural Programmer is applied only on a synthetic dataset. In that dataset, when the expected answer is an entry in the given table, its position is explicitly marked in the table. However, real-world datasets certainly do not include those markers, and lead to many ambiguities (e.g., Pasupat & Liang (2015)). In particular, when the answer is a number that occurs literally in the table, it is not known, a priori, whether the answer should be generated by an operation or selected from the table. Similarly, when the answer is a natural language phrase that occurs in multiple positions in the table, it is not known which entry (or entries) in the table is actually responsible for the answer. We extend Neural Programmer to handle the weaker supervision signal by backpropagating through decisions that concern how the answer is generated when there is an ambiguity.
Our main experimental results concern WikiTableQuestions (Pasupat & Liang, 2015), a real-world question-answering dataset on database tables, with only 10,000 examples for weak supervision. This dataset is particularly challenging because of its small size and the lack of strong supervision, and also because the tables provided at test time are never seen during training, so learning requires adaptation at test time to unseen column names. A state-of-the-art, traditional semantic parser that relies on pruning strategies to ease program search achieves 37.1% accuracy. Standard neural network models like sequence-to-sequence and pointer networks do not appear to be promising for this dataset, as confirmed in our experiments below, which yield single-digit accuracies. In comparison, a single Neural Programmer model using minimal text pre-processing, and trained end-to-end, achieves 34.2% accuracy. This surprising result is enabled primarily by the sample efficiency of Neural Programmer, by the enhanced objective function, and by reducing overfitting via strong regularization with dropout (Srivastava et al., 2014; Iyyer et al., 2015; Gal & Ghahramani, 2016) and weight decay. An ensemble of 15 models, even with a trivial combination technique, achieves 37.7% accuracy.
"}, {"section_index": "3", "section_name": "2 NEURAL PROGRAMMER", "section_text": "In this section we describe in greater detail the Neural Programmer model and the modifications we made to the model. Neural Programmer is a neural network augmented with a set of discrete operations. The model consists of four modules:
• Question RNN that processes the question and converts the tokens to a distributed representation. We use an LSTM network (Hochreiter & Schmidhuber, 1997) as the question RNN.
• A list of discrete operations such as counting and entry selection that are manually defined. Each operation is parameterized by a real-valued vector that is learned during training.
• A selector module that induces two probability distributions at every time step, one over the set of operations and another over the set of columns. The input to the selector is obtained by concatenating the last hidden state of the question RNN, the hidden state of the history RNN from the current timestep, and the attention vector obtained by performing soft attention (Bahdanau et al., 2014) on the question using the history vector. Following Neelakantan et al. (2016), we employ hard selection at test time.
• History RNN modeled by a simple RNN (Elman, 1990) with tanh activations which remembers the previous operations and columns selected by the model. The input to the history RNN at each timestep is the result of concatenating the weighted representations of operations and columns with their corresponding probability distributions produced by the selector at the previous timestep.
A more detailed description of the basic model can be found in Neelakantan et al. (2016). The model runs for a fixed total of T timesteps. The parameters of the operations, selector module, question and history RNNs are all learned with backpropagation using a weak supervision signal that consists of the final answer. Below, we discuss several modifications to the model to make it more broadly applicable, and easier to train.
"}, {"section_index": "4", "section_name": "2.1 OPERATIONS", "section_text": "We use 15 operations in the model that were chosen to closely match the set of operations used in the baseline model (Pasupat & Liang, 2015). All the operations except select and most frequent entry operate only on the set of selected rows, which is given by the row selector variable. Before the first timestep, all the rows in the table are set to be selected. The built-in operations are:
• count returns the number of selected rows in row selector.
• select and most frequent entry are operations which are computed only once for every question and output a boolean tensor with size same as the size of the input table. An entry in the output of the select operation is set to 1 if the entry matches some phrase in the question. The matched phrases in the question are anonymized to prevent overfitting. Similarly, for most frequent entry, it is set to 1 if the entry is the most frequently occurring one in its column.
• argmax, argmin, greater than, less than, greater than or equal to, less than or equal to are all operations that output a tensor with size same as the size of the input table.
• first, last, previous and next modify the row selector.
• print assigns row selector on the selected column of lookup answer.
• reset resets row selector to its initial value. This operation also serves as no-op when the model needs to induce programs whose complexity is less than T.
All the operations are defined to work with soft selection so that the model can be trained with backpropagation. The operations along with their definitions are discussed in the Appendix.
"}, {"section_index": "5", "section_name": "2.2 OUTPUT AND ROW SELECTOR", "section_text": "Neural Programmer makes use of three variables: row selector, scalar answer and lookup answer, which are updated at every timestep. The variable lookup answer stores answers that are selected from the table while scalar answer stores numeric answers that are not provided in the table¹. The induced program transfers information across timesteps using the row selector variable, which contains rows that are selected by the model.
¹It is possible to extend the model to generate natural language responses using an RNN decoder but it is not the focus of this paper and we leave it for further work.
Given an input table \Pi, containing M rows and C columns (M and C can vary across examples), the output variables at timestep t are given by:

scalar\ answer_t = \alpha_t^{op}(count)\, output_t(count)
lookup\ answer_t[i][j] = \alpha_t^{col}(j)\, \alpha_t^{op}(print)\, row\ select_{t-1}[i], \quad \forall (i, j),\ i = 1, \ldots, M,\ j = 1, \ldots, C

where \alpha_t^{op}(op) and \alpha_t^{col}(j) are the probabilities assigned by the selector to operation op and column j at timestep t respectively, and output_t(count) is the output of the count operation at timestep t. The row selector variable at timestep t is obtained by taking the weighted average of the outputs of the remaining operations and is discussed in the Appendix. lookup answer_T[i][j] is the probability that the element (i, j) in the input table is in the final answer predicted by the model.
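Read as code, the output equations above amount to a couple of products. The sketch below follows the reconstructed form with toy shapes; the name-to-index mapping `op_index` is an assumption of this sketch, not the paper's API.

    import numpy as np

    def outputs_at_t(alpha_op, alpha_col, row_select_prev, count_output, op_index):
        # scalar answer weights the count output by its operation probability;
        # lookup_answer[i, j] = alpha_col[j] * alpha_op[print] * row_select_prev[i].
        scalar_answer = alpha_op[op_index["count"]] * count_output
        lookup_answer = alpha_op[op_index["print"]] * np.outer(row_select_prev, alpha_col)
        return scalar_answer, lookup_answer

    op_index = {"count": 0, "print": 1}
    alpha_op = np.array([0.7, 0.3])       # toy selector output over 2 operations
    alpha_col = np.array([0.1, 0.9])      # toy distribution over 2 columns
    row_sel = np.array([1.0, 0.0, 0.5])   # soft row selector from timestep t-1
    print(outputs_at_t(alpha_op, alpha_col, row_sel, count_output=1.5, op_index=op_index))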
We modify the training objective of Neural Programmer to handle the supervision signal available in real-world settings. In previous work, the position of the answers is explicitly marked in the table when the answer is an entry from the table. However, as discussed in Section 1, in real-world datasets (e.g., Pasupat & Liang (2015)) the answer is simply written down, introducing two kinds of ambiguities. First, when the answer is a number and the number is in the table, it is not known whether the loss should be computed using the scalar answer variable or the lookup answer variable. Second, when the answer is a natural language phrase and the phrase occurs in multiple positions in the table, we again do not know which entry (or entries) in the table is actually responsible for generating the answer. We extend Neural Programmer to handle this weaker supervision signal during training by computing the loss only on the prediction that is closest to the desired response.
For scalar answers we compute the square loss:

L_{scalar}(scalar\ answer_T, y) = \frac{1}{2}\left(scalar\ answer_T - y\right)^2

where y is the ground truth answer. We divide L_{scalar} by the number of rows in the input table, and do not backpropagate on examples for which the loss is greater than a threshold, since that leads to instabilities in training.
When the answer is a list of items y = (a_1, a_2, \ldots, a_N), for each element in the list (a_i, i = 1, 2, \ldots, N) we compute all the entries in the table that match that element, given by S_i = \{(r, c)\ \forall (r, c)\ \Pi[r][c] = a_i\}. We tackle the ambiguity introduced when an answer item occurs at multiple entries in the table by computing the loss only on the entry which is assigned the highest probability by the model. We construct g \in \{0, 1\}^{M \times C}, where g[i, j] indicates whether the element (i, j) in the input table is part of the output. We compute log-loss for each entry and the final loss is given by:

L_{lookup}(lookup\ answer_T, y) = \sum_{i=1}^{N} \min_{(r, c) \in S_i}\left(-\log(lookup\ answer_T[r, c])\right) - a \sum_{i=1}^{M} \sum_{j=1}^{C} \mathbb{1}[g[i, j] = 0]\, \log(1 - lookup\ answer_T[i, j])

where \mathbb{1}[cond] is 1 when cond is True, and 0 otherwise, and a is a normalizing constant.
We deal with the ambiguity that occurs when the ground truth is a number and the number also occurs in the table by computing the final loss as the soft minimum of L_{scalar} and L_{lookup}. Otherwise, the loss for an example is L_{scalar} when the ground truth is a number and L_{lookup} when the ground truth matches some entries in the table. The two loss functions L_{scalar} and L_{lookup} are on different scales, so we multiply L_{lookup} by a constant factor, which we set to 50.0 after a small exploration in our experiments.
Since we employ hard selection at test time, only one among scalar answer and lookup answer is modified at the last timestep. We use the variable that is set at the last timestep as the final output of the model.
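Putting the pieces together, a sketch of the per-example objective might look like the following; a plain minimum stands in for the soft minimum, the toy values are illustrative, and `a` mirrors the normalizing constant in the equation above.

    import numpy as np

    def scalar_loss(scalar_answer, y, num_rows):
        # Square loss, divided by the number of table rows as described above.
        return 0.5 * (scalar_answer - y) ** 2 / num_rows

    def lookup_loss(lookup_answer, matches, g, a=1.0):
        # matches: one set S_i of candidate (row, col) entries per answer item;
        # g: binary M x C matrix marking entries that belong to the answer.
        loss = sum(min(-np.log(lookup_answer[r, c]) for (r, c) in S) for S in matches)
        loss -= a * np.sum((g == 0) * np.log(1.0 - lookup_answer))
        return loss

    # Toy 2 x 2 table: the single answer item matches entries (0, 1) and (1, 1).
    la = np.array([[0.1, 0.8], [0.2, 0.3]])
    g = np.array([[0, 1], [0, 1]])
    L_lookup = 50.0 * lookup_loss(la, matches=[{(0, 1), (1, 1)}], g=g)
    L_scalar = scalar_loss(scalar_answer=3.2, y=3.0, num_rows=2)
    print(min(L_scalar, L_lookup))  # plain minimum in place of the soft minimum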
"}, {"section_index": "6", "section_name": "3 EXPERIMENTS", "section_text": "We apply Neural Programmer on the WikiTableQuestions dataset (Pasupat & Liang, 2015) and compare it to different non-neural baselines, including a natural language semantic parser developed by Pasupat & Liang (2015). Further, we also report results from training the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of the pointer networks (Vinyals et al., 2015). Our model is implemented in TensorFlow and the model takes approximately a day to train on a single Tesla K80 GPU. We use double-precision format to store the model parameters, since the gradients become undefined values in single-precision format. Our code is available at https://github.com/tensorflow/models/tree/master/neural_
"}, {"section_index": "7", "section_name": "3.1 DATA", "section_text": "We use the train, development, and test split given by Pasupat & Liang (2015). The dataset contains 11321, 2831, and 4344 examples for training, development, and testing respectively. We use their tokenization, number and date pre-processing. There are examples with answers that are neither number answers nor phrases selected from the table. We ignore these questions during training, but the model is penalized during evaluation, following Pasupat & Liang (2015). The tables provided in the test set are unseen at training, hence requiring the model to adapt to unseen column names at test time. We train only on examples for which the provided table has less than 100 rows, since we run out of GPU memory otherwise, but consider all examples at test time.
Table 1: Performance of Neural Programmer compared to baselines from Pasupat & Liang (2015). The performance of an ensemble of 15 models is competitive to the current state-of-the-art natural language semantic parser.
"}, {"section_index": "8", "section_name": "3.2 TRAINING DETAILS", "section_text": "We use T = 4 timesteps in our experiments. Words and operations are represented as 256-dimensional vectors, and the hidden vectors of the question and the history RNN are also 256-dimensional. The parameters are initialized uniformly randomly within the range [-0.1, 0.1]. We train the model using the Adam optimizer with mini-batches of size 20. The ε hyperparameter in Adam is set to 1e-6 while others are set to the default values. Since the training set is small compared to other datasets in which neural network models are usually applied, we rely on strong regularization:
• We clip the gradients to norm 1 and employ early-stopping.
• The occurrences of words that appear less than 10 times in the training set are replaced by a single unknown word token.
• We add a weight decay penalty with strength 0.0001.
• We use dropout with a keep probability of 0.8 on input and output vectors of the RNN, and on selector, operation and column name representations (Srivastava et al., 2014).
• We use dropout with keep probability of 0.9 on the recurrent connections of the question RNN and history RNN, using the technique from Gal & Ghahramani (2016).
• We use word-dropout (Iyyer et al., 2015) with keep probability of 0.9; a minimal sketch is given after this list. Here, words in the question are randomly replaced by the unknown word token while training.
We tune the dropout rates, regularization strength, and the ε hyperparameter using grid search on the development data; we fix the other hyperparameters after a small exploration during initial experiments.
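Of the regularizers above, word-dropout is the least standard; a minimal sketch (the unknown-token id below is an assumption of this sketch) is:

    import numpy as np

    UNK = 0  # id of the unknown-word token; the id itself is an assumption here

    def word_dropout(token_ids, keep_prob=0.9, rng=np.random):
        # Word-dropout (Iyyer et al., 2015): during training, each question
        # token is replaced by the unknown token with probability 1 - keep_prob.
        token_ids = np.asarray(token_ids)
        keep = rng.random_sample(token_ids.shape) < keep_prob
        return np.where(keep, token_ids, UNK)

    print(word_dropout([5, 17, 42, 8, 23]))  # occasionally zeroes out a token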
While it is generally believed that neural network models require a large number of training examples compared to simpler linear models to get good performance, our model remains competitive with the traditional semantic parser despite the small training set.

Table 2: Model ablation studies. We find that dropout and weight decay, along with the boolean feature indicating a matched table entry for column selection, have a significant effect on the performance of the model.

We did not get better results either by using pre-trained word vectors (Mikolov et al., 2013) or by pre-training the question RNN with a language modeling objective (Dai & Le, 2015). A possible explanation is that the word vectors obtained from unsupervised learning may not be suitable for the task under consideration. For example, the learned representations of words like maximum and minimum from unsupervised learning are usually close to each other, but for our task this is counterproductive. We also consider replacing soft selection with hard selection and training the model with the REINFORCE algorithm (Williams, 1992). The model fails to learn in this experiment, probably because the model has to search over millions of symbolic programs for every input question, making it highly unlikely to find a program that gives a reward. Hence, the parameters of the model are not updated frequently enough."}, {"section_index": "9", "section_name": "3.3.1 NEURAL NETWORK BASELINES", "section_text": "To understand the difficulty of the task for neural network models, we experiment with two neural network baselines: the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of pointer networks (Vinyals et al., 2015). The input to the sequence-to-sequence model is a concatenation of the table and the question, and the decoder produces the output one token at a time. We consider only examples whose input length is less than 400 to make the running time reasonable. The resulting dataset has 8,857 and 1,623 training and development examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 8.9%. Next, we experiment with pointer networks to select entries in the table as the final answer. We modify pointer networks to have two attention heads: one to select the column and the other to select entries within a column. Additionally, the model performs multiple pondering steps on the table before returning the final answer. We train this model only on lookup questions, since the model does not have a decoder to generate answers.
We consider only examples whose tables have less than 100 rows, resulting in training and development sets consisting of 7,534 and 1,829 examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 4.0%. These results confirm our intuition that discrete operations are hard to learn for neural networks, particularly with small datasets in real-world settings."}, {"section_index": "10", "section_name": "3.4.1 MODEL ABLATION", "section_text": "Table 2 shows the impact of different model design choices on the final performance. While anonymizing phrases in the question that match some table entry seems to have a small positive effect, regularization has a much larger effect on the performance. Column selection is performed in Neelakantan et al. (2016) using only the name of a column; however, this selection procedure is insufficient in real-world settings. For example, the column selected in question 3 in Table 3 does not have a corresponding phrase in the question. Hence, to select a column we additionally use a boolean feature that indicates whether an entry in that column matches some phrase in the question (a sketch of this feature is given below). Table 2 shows that the addition of this boolean feature has a significant effect on performance.
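A minimal sketch of this boolean feature follows. Real matching operates on normalized multi-word phrases; the token-level comparison here is a simplifying assumption for illustration:

import numpy as np

def column_match_feature(table, question_tokens):
    """1.0 for each column containing an entry that matches the question.
    `table` is an M x C nested list of strings; matching here is exact
    token equality, a simplification of phrase matching."""
    M, C = len(table), len(table[0])
    question = set(question_tokens)
    feat = np.zeros(C)
    for j in range(C):
        if any(table[i][j] in question for i in range(M)):
            feat[j] = 1.0
    return feat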
Table 3: A few examples of programs induced by Neural Programmer that generate the correct answer in the development set. mfe is an abbreviation for the operation most frequent entry. The model runs for 4 timesteps, selecting an operation and a column at every step. The model employs hard selection during evaluation. The column name is displayed (in brackets) only when the operation picked at that step takes a column as input, while the operation is displayed only when it is other than the reset operation ("-"). Programs that choose count as the final operation produce a number as the final answer, while programs that select print as the final operation produce entries selected from the table as the final answer.

1. what is the total number of teams? | Steps: -, -, -, count
2. how many games had more than 1,500 in attendance? | Steps: -, -, >= [attendance], count
3. what is the total number of runner-ups listed on the chart? | Steps: -, -, select [outcome], count
4. which year held the most competitions? | Steps: -, -, mfe [year], print [year]
5. what opponent is listed last on the table? | Steps: last, -, last, print [opponent]
6. which section is longest? | Steps: -, -, argmax [kilometers], print [name]
7. which engine(s) has the least amount of power? | Steps: -, -, argmin [power], print [engine]
8. what was claudia roll's time? | Steps: -, -, select [swimmer], print [time]
9. who had more silver medals, cuba or brazil? | Steps: argmax [nation], select [nation], argmax [silver], print [nation]
10. who was the next appointed director after lee p. brown? | Steps: select [name], next, last, print [name]
11. what team is listed previous to belgium? | Steps: select [team], previous, first, print [team]

Table 4: Statistics of the different sequences of operations among the examples answered correctly by the model in the development set. For each sequence of operations in the table, we also point to corresponding example programs in Table 3. Superlative operations include argmax and argmin, while comparison operations include greater than, less than, greater than or equal to and less than or equal to. The model induces a program that results in a scalar answer 30.7% of the time, while the induced program is a table lookup for the remaining questions. print and select are the two most common operations, used 69.3% and 66.7% of the time respectively.

Operation | Program in Table 3 | Amount (%)
Scalar Answer:
  Only Count | 1 | 6.5
  Comparison + Count | 2 | 2.1
  Select + Count | 3 | 22.1
  Scalar Answer (total) | 1, 2, 3 | 30.7
Lookup Answer:
  Most Frequent Entry + Print | 4 | 1.7
  First/Last + Print | 5 | 9.5
  Superlative + Print | 6, 7 | 13.5
  Select + Print | 8 | 17.5
  Select + {first, last, previous, next, superlative} + Print | 9-11 | 27.1
  Lookup Answer (total) | 4-11 | 69.3"}, {"section_index": "11", "section_name": "3.4.2 INDUCED PROGRAMS", "section_text": "Table 3 shows a few examples of programs induced by Neural Programmer that yield the correct answer in the development set. The programs given in Table 3 show a few characteristics of the learned model. First, our analysis indicates that the model can adapt to unseen column names at test time. For example, in Question 3, the word outcome occurs only 8 times in the training set and is replaced with the unknown word token. Second, the model does not always induce the most efficient program (with respect to the number of operations, other than the reset operation, that are picked) to solve a task. The last 3 questions in the table can be solved using simpler programs. Finally, the model does not always induce the correct program to get the ground truth answer. For example, the last 3 programs will not result in the correct response for all input database tables. The programs would produce the correct response only when the select operation matches one entry in the table."}, {"section_index": "12", "section_name": "3.4.3 CONTRIBUTION OF DIFFERENT OPERATIONS", "section_text": "Table 4 shows the contribution of the different operations. The model induces a program that results in a scalar answer 30.7% of the time, while the induced program is a table lookup for the remaining questions. The two most commonly used operations by the model are print and select."}, {"section_index": "13", "section_name": "3.4.4 ERROR ANALYSIS", "section_text": "To conclude this section, we suggest ideas to potentially improve the performance of the model. First, the oracle performance with 15 Neural Programmer models is 50.5% on the development set, while averaging achieves only 37.5%, implying that there is still room for improvement. Next, the accuracy of a single model on the training set is 53%, which is about 20% higher than the accuracy on both the development set and the test set. This difference in performance indicates that the model suffers from significant overfitting even after employing strong regularization. It also suggests that the performance of the model could be greatly improved by obtaining more training data.
Nevertheless, there are limits to the performance improvements we may reasonably expect: in particular, as shown in previous work (Pasupat & Liang, 2015), 21% of questions on a random set of 200 examples in the considered dataset are not answerable because of various issues such as annotation errors and tables requiring advanced normalization."}, {"section_index": "14", "section_name": "4 OTHER RELATED WORK", "section_text": "While we discuss in detail various semantic parsing and neural program induction techniques in Section 1, here we briefly describe other relevant work. Recently, Kocisky et al. (2016) developed a semi-supervised semantic parsing method that uses question-program pairs as supervision. Concurrently to our work, Liang et al. (2016) propose neural symbolic machines, a model very similar to Neural Programmer but trained using the REINFORCE algorithm (Williams, 1992). They use only 2 discrete operations and run for a total of 3 timesteps, hence inducing programs that are much simpler than ours. Neural networks have also been applied to question-answering datasets that do not require much arithmetic reasoning (Bordes et al., 2014; Iyyer et al., 2015; Peng et al., 2015; Hermann et al., 2015; Kumar et al., 2016). Wang & Jiang (2016) use a neural network model to get state-of-the-art results on a reading comprehension task (Rajpurkar et al., 2016)."}, {"section_index": "15", "section_name": "5 CONCLUSION", "section_text": "In this paper, we enhance Neural Programmer to work with weaker supervision signals, making it more broadly applicable. Soft selection during training enables the model to actively explore the space of programs by backpropagation with superior sample complexity. In our experiments, we show that the model achieves performance comparable to a state-of-the-art traditional semantic parser even though the training set contains only 10,000 examples. To our knowledge, this is the first instance of a weakly supervised, end-to-end neural network model that induces programs on a real-world dataset."}, {"section_index": "16", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2014.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. NIPS, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. NIPS, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Acknowledgements: We are grateful to Panupong Pasupat for answering numerous questions about the dataset, and for providing a pre-processed version of the dataset and the output of the semantic parser. We thank David Belanger, Samy Bengio, Greg Corrado, Andrew Dai, Jeff Dean, Nando de Freitas, Shixiang Gu, Navdeep Jaitly, Rafal Jozefowicz, Ashish Vaswani, Luke Vilnis, Yuan Yu and Barret Zoph for their suggestions, and the Google Brain team for the support.
Arvind Neelakantan is\nsupported by a Google PhD fellowship in machine learning.\nMartin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gre-\ngory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Good-\nfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal J\u00e9zefowicz, Lukasz\nKaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Gor-\ndon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal\nTalwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Vi\u00e9gas, Oriol Vinyals,\nPete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaogiang Zheng. Tensorflow:\nLarge-scale machine learning on heterogeneous distributed systems. ArXiv, 2016.\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural\nnetworks for question answering. NAACL, 2016.\nAntoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings.\nEMNLP, 2014.\nYarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent\nneural networks. NIJPS, 2016.\nMohit Iyyer, Jordan L. Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, anc\nHal Daum\u00e9 II. A neural network for factoid question answering over paragraphs. EMNLP\n2014.\nRobin Jia and Percy Liang. Data recombination for neural semantic parsing. ACL, 2016.\nLukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. JCLR, 2016.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. JCLR, 2014\nJayant Krishnamurthy and Thomas Kollar. Jointly learning to parse and perceive: Connecting natural\nlanguage to the physical world. TACL, 2013.\nTom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling semantic parsers with\non-the-fly ontology matching. EMNLP, 2013.\nChen Liang, Jonathan Berant, Quoc Le, Kenneth Forbus, and Ni Lao. Neural symbolic machines:\nLearning semantic parsers on freebase with weak supervision. NAMPI Workshop, NIPS. 2016.\nPercy Liang. Learning executable semantic parsers for natural language understanding. ACM, 2016\nPercy Liang, Michael I. Jordan, and Dan Klein. Learning dependency-based compositional seman.\ntice ACT 9011\nPanupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables.\nACL, 2015.\nBaolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. Towards neural network-based reason-\ning. ArXiv, 2015.\nScott Reed and Nando De Freitas. Neural programmer-interpreters. JCLR, 2016.\nNitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov\nDropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. NJPS, 2015\nMohit lyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. Deep unordered compo-\nsition rivals syntactic methods for text classification. ACL, 2015.\nArmand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent\nnets. NJPS, 2015.\nTomas Kocisky, Gabor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and\nKarl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. ArXiv,\n2016.\nAnkit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter On-\ndruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for\nnatural language processing. 
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. ICLR, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. ArXiv, 2016.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.

Yushi Wang, Jonathan Berant, and Percy Liang. Building a semantic parser overnight. ACL, 2015.

Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. Neural enquirer: Learning to query tables with natural language. ArXiv, 2015.

John M. Zelle and Raymond J. Mooney. Learning to parse database queries using inductive logic programming. AAAI/IAAI, 1996.

Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. UAI, 2005.

Table 5 shows the list of operations built into the model along with their definitions.

Table 5: List of all operations provided to the model along with their definitions. mfe is an abbreviation for the operation most frequent entry. [cond] is 1 when cond is True, and 0 otherwise. Comparison, select, reset and mfe operations are independent of the timestep, while all the other operations are computed at every timestep. Superlative operations and most frequent entry are computed within a column. The operations calculate the expected output with respect to the membership probabilities given by the row selector so that they can work with probabilistic selection.

Type: Aggregate
  count: $count_t = \sum_{i=1}^{M} \text{row\_select}_{t-1}[i]$

Type: Superlative
  argmax: $max_t[i][j] = \max(0.0,\ \text{row\_select}_{t-1}[i] - \sum_{k=1}^{M} [T[i][j] < T[k][j]] \cdot \text{row\_select}_{t-1}[k]),\ \forall (i, j)$
  argmin: $min_t[i][j] = \max(0.0,\ \text{row\_select}_{t-1}[i] - \sum_{k=1}^{M} [T[i][j] > T[k][j]] \cdot \text{row\_select}_{t-1}[k]),\ \forall (i, j)$

Type: Comparison
  greater than: $g_t[i][j] = [T[i][j] > pivot_j],\ \forall (i, j)$
  less than: $l_t[i][j] = [T[i][j] < pivot_j],\ \forall (i, j)$
  greater than or equal to: $ge_t[i][j] = [T[i][j] \ge pivot_j],\ \forall (i, j)$
  less than or equal to: $le_t[i][j] = [T[i][j] \le pivot_j],\ \forall (i, j)$

Type: Table Ops
  select: $s[i][j] = 1.0$ if $T[i][j]$ appears in the question, else $0.0$
  mfe: $mfe[i][j] = 1.0$ if $T[i][j]$ is the most common entry in column $j$, else $0.0$
  first: $f_t[i] = \max(0.0,\ \text{row\_select}_{t-1}[i] - \sum_{j=1}^{i-1} \text{row\_select}_{t-1}[j])$
  last: $la_t[i] = \max(0.0,\ \text{row\_select}_{t-1}[i] - \sum_{j=i+1}^{M} \text{row\_select}_{t-1}[j])$
  previous: $p_t[i] = \text{row\_select}_{t-1}[i+1],\ \forall i < M;\ p_t[M] = 0$
  next: $n_t[i] = \text{row\_select}_{t-1}[i-1],\ \forall i > 1;\ n_t[1] = 0$

Type: Print
  print: $\text{lookup\_answer}_t[i][j] = \text{row\_select}_{t-1}[i],\ \forall (i, j),\ i = 1, \ldots, M,\ j = 1, \ldots, C$

Type: Reset
  reset: $r_t[i] = 1,\ \forall i = 1, 2, \ldots, M$"}, {"section_index": "17", "section_name": "ROW SELECTOR", "section_text": "As discussed in Section 2.3, the output variables scalar answer and lookup answer are calculated using the output of the count operation and the print operation respectively. The row selector is computed using the output of the remaining operations and is given by:

$\text{row\_selector}_t[i] = \sum_{j=1}^{C} \Big\{ \alpha_t^{col}(j)\,\alpha_t^{op}(>)\, g_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(<)\, l_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\ge)\, ge_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\le)\, le_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\text{argmax})\, max_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\text{argmin})\, min_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\text{select})\, s[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\text{mfe})\, mfe[i][j] \Big\} + \alpha_t^{op}(\text{previous})\, p_t[i] + \alpha_t^{op}(\text{next})\, n_t[i] + \alpha_t^{op}(\text{reset})\, r_t[i] + \alpha_t^{op}(\text{first})\, f_t[i] + \alpha_t^{op}(\text{last})\, la_t[i], \quad \forall i = 1, 2, \ldots, M$

where $\alpha_t^{op}(op)$ and $\alpha_t^{col}(j)$ are the probabilities assigned by the selector to operation op and column j at timestep t respectively.
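The equation is a probability-weighted mixture of the operation outputs. The following NumPy sketch computes it; the dictionary interfaces and array shapes are assumptions for illustration, not our implementation:

import numpy as np

def row_selector_update(alpha_op, alpha_col, row_ops, col_ops):
    """Soft update of the row selector (sketch of the equation above).
    alpha_op: dict op_name -> selector probability of that operation.
    alpha_col: (C,) column probabilities.
    row_ops: dict op_name -> (M,) output (first, last, previous, next, reset).
    col_ops: dict op_name -> (M, C) output (comparisons, superlatives,
        select, mfe)."""
    M = next(iter(row_ops.values())).shape[0]
    new_sel = np.zeros(M)
    for op, out in col_ops.items():   # sum_j alpha_col(j) * alpha_op * out[i][j]
        new_sel += alpha_op[op] * out.dot(alpha_col)
    for op, out in row_ops.items():   # column-independent operations
        new_sel += alpha_op[op] * out
    return new_sel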
"}]
SyWvgP5el
[{"section_index": "0", "section_name": "EPOPT: LEARNING ROBUST NEURAL NETWORI\nPOLICIES USING MODEL ENSEMBLES", "section_text": "Aravind Rajeswaran!, Sarvjeet Ghotra\u2019, Balaraman Ravindran\u00ae*, Sergey Levine\u2019\nSample complexity and safety are major challenges when learning policies with\nreinforcement learning for real-world tasks, especially when the policies are repre:\n\nsented using rich function aj\nmethods where the real-wo:\n\npproximators like deep neural networks. Model-based\nrid target domain is approximated using a simulated\n\nsource domain provide an avenue to tackle the above challenges by augmenting real\ndata with simulated data. However, discrepancies between the simulated source\ndomain and the target domain pose a challenge for simulated training. We introduce\n\nthe EPOpt algorithm, whic!\n\nh uses an ensemble of simulated source domains and\n\na form of adversarial training to learn policies that are robust and generalize to a\nbroad range of possible target domains, including unmodeled effects. Further, the\nprobability distribution over source domains in the ensemble can be adapted using\n\ndata from target domain and\nit a better approximation. T!\n\napproximate Bayesian methods, to progressively make\nhus, learning on a model ensemble, along with source\n\ndomain adaptation, provides the benefit of both robustness and learning/adaptation"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning with powerful function approximators like deep neural networks (deep RL\n\nhas recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al. {2015\nSilver et al.|/2016), simulated control problems (Lillicrap et al] [2015b), an\ngraphics (Peng et al.| (2016). However, high sample complexity is a major barrier for directly applyin:\nmodel-free deep RL methods for physical control tasks. Model-free algorithms like Q-learnins\nactor-critic, and policy gradients are known to suffer from long learning times (Kakade}/2003), whic\nis compounded when used in conjunction with expressive function approximators like deep neura\nnetworks (DNNs). The challenge of gathering samples from the real world is further exacerbate:\nby issues of safety for the agent and environment, since sampling with partially learned policie\ncould be unstable (Garcia & Fernandez||2015). Thus, model-free deep RL methods often require\nprohibitively large numbers of potentially dangerous samples for physical control tasks.\nModel-based methods, where the real-world target domain is approximated with a simulated source\ndomain, provide an avenue to tackle the above challenges by learning policies using simulated data\nThe principal challenge with simulated training is the systematic discrepancy between source anc\ntarget domains, and therefore, methods that compensate for systematic discrepancies (modeling\nerrors) are needed to transfer results from simulations to real world using RL. We show that the\nimpact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble\nof models in an adversarial fashion to learn policies that are robust to parametric model errors, as\nwell as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from\nthe target domain to progressively make it a better approximation. 
This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009). While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing robustness of DNN-policies is particularly important to transfer their success from simulated tasks to physical systems."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we propose the Ensemble Policy Optimization (EPOpt-ε) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. an ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training, to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often, in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn policies that are highly optimized for specific model instances, but brittle under model perturbations. In our experiments, we did not observe significant loss in performance from requiring the policy to work on multiple models (for example through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain. Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12-dimensional state space; 3-dimensional action space) and half-cheetah (18-dimensional state space; 6-dimensional action space) benchmarks in MuJoCo. Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of the source distribution) alone.

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form $M(p) = \langle S, A, T_p, R_p, \gamma, S_{0,p} \rangle$, where S, A are (continuous) states and actions respectively; $T_p$, $R_p$, and $S_{0,p}$ are the state transition, reward function, and initial state distribution respectively, all parametrized by p; and $\gamma$ is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions.
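As a concrete (and purely illustrative) picture of what the parameter vector p might contain, the following sketch uses the hopper parameters discussed later; the default values follow the source-distribution means of Table 1 in the experiments section and are assumptions for illustration:

from dataclasses import dataclass

@dataclass
class ModelParams:
    """Physical parameters p defining one source-domain MDP M(p)."""
    torso_mass: float = 6.0
    ground_friction: float = 2.0
    joint_damping: float = 2.5
    armature: float = 1.0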
We use transition functions of the form $S_{t+1} = T_p(s_t, a_t)$, where $T_p$ is a random process and $S_{t+1}$ is a random variable.

We distinguish between source and target MDPs using M and W respectively. We also refer to M and W as source and target domains respectively, as is common in the transfer learning set-up. Our objective is to learn the optimal policy for W; and to do so, we have access to M(p). We assume that we have a distribution (D) over the source domains (MDPs), generated by a distribution over the parameters P = P(p) that captures our subjective belief about the parameters of W. Let P be parametrized by ψ (e.g. mean, standard deviation). For example, M could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, p could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e. $\exists p \mid M(p) = W$. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments. We wish to learn a policy $\pi^*_\theta(s)$ that performs well for all M ∼ D. Note that this robust policy does not have an explicit dependence on p, and we require it to perform well without knowledge of p."}, {"section_index": "3", "section_name": "3 LEARNING PROTOCOL AND EPOPT ALGORITHM", "section_text": "We follow the round-based learning protocol of Bayesian model-based RL. We use the term rounds when interacting with the target domain, and episodes when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e. posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round i, we update two sets of parameters: θ_i, the parameters of the robust policy (neural network); and ψ_i, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps."}, {"section_index": "4", "section_name": "3.1 ROBUST POLICY SEARCH", "section_text": "We introduce the EPOpt algorithm for finding a robust policy using the source distribution. EPOpt is a policy-gradient-based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update. The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of sampled trajectories.
We first define evaluation metrics for the parametrized policy $\pi_\theta$:

$\eta_M(\theta, p) = \mathbb{E}_\tau\left[\sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t)\right], \qquad \eta_D(\theta) = \mathbb{E}_{p \sim P}\left[\eta_M(\theta, p)\right] = \mathbb{E}_{p \sim P}\left[\mathbb{E}_\tau\left[\sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t)\right]\right] \quad (1)$

In (1), $\eta_M(\theta, p)$ is the evaluation of $\pi_\theta$ on the model M(p), with $\tau$ being trajectories generated by M(p) and $\pi_\theta$: $\tau = \{s_t, a_t, r_t\}_{t=0}^{T-1}$, where $s_{t+1} \sim T_p(s_t, a_t)$, $s_0 \sim S_{0,p}$, $r_t \sim R_p(s_t, a_t)$, and $a_t \sim \pi_\theta(s_t)$. Similarly, $\eta_D(\theta)$ is the evaluation of $\pi_\theta$ over the source domain distribution. The corresponding expectation is over trajectories $\tau$ generated by D and $\pi_\theta$: $\tau = \{s_t, a_t, r_t\}_{t=0}^{T-1}$, where $s_{t+1} \sim T_{p_t}(s_t, a_t)$, $p_{t+1} = p_t$, $s_0 \sim S_{0,p_0}$, $r_t \sim R_{p_t}(s_t, a_t)$, $a_t \sim \pi_\theta(s_t)$, and $p_0 \sim P$. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing $\eta_D$ allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance for different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

$\max_{\theta, y} \int_{F(\theta)} \eta_M(\theta, p) \, P(p) \, dp \quad \text{s.t.} \quad P\left(\eta_M(\theta, P) \le y\right) = \epsilon \quad (2)$

where $F(\theta) = \{p \mid \eta_M(\theta, p) \le y\}$ is the set of parameters corresponding to models that produce the worst $\epsilon$ percentile of returns, and provides the limit for the integral; $\eta_M(\theta, P)$ is the random variable of returns, which is induced by the distribution over model parameters; and $\epsilon$ is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst $\epsilon$-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-ε, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

Algorithm 1: EPOpt-ε for Robust Policy Search
1: Input: ψ, θ_0, n_iter, N, ε
2: for iteration i = 0, 1, 2, ..., n_iter do
3:   for k = 1, 2, ..., N do
4:     sample model parameters p_k ∼ P_ψ
5:     sample a trajectory τ_k = {s_t, a_t, r_t, s_{t+1}}_{t=0}^{T-1} from M(p_k) using policy π(θ_i)
6:   end for
7:   compute Q_ε = ε-percentile of {R(τ_k)}_{k=1}^{N}
8:   select the sub-set T = {τ_k : R(τ_k) ≤ Q_ε}
9:   update policy: θ_{i+1} = BatchPolOpt(θ_i, T)
10: end for

In Algorithm 1, $R(\tau_k) = \sum_{t=0}^{T-1} \gamma^t r_{t,k}$ denotes the discounted return obtained in trajectory sample $\tau_k$. In line 7, we compute the ε-percentile value of returns from the N trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than $Q_\epsilon$. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline with a time-varying feature vector to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than $Q_\epsilon$. We found that this approach led to empirically good results.

For small values of ε, we observed that using the sub-sampling step from the beginning led to unstable learning. Policy gradient methods adjust the parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories. EPOpt-ε, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of ε = 1 for a few iterations before setting epsilon to the desired value. This corresponds to exploring initially to find promising trajectories, and then rapidly reducing the probability of trajectories that do not generalize.
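To make the procedure concrete, the following is a minimal NumPy sketch of one EPOpt-ε iteration (lines 3-9 of Algorithm 1). sample_params, rollout, and batch_pol_opt are assumed interfaces standing in for the simulator and the TRPO subroutine; this is an illustration of the sub-sampling logic, not the authors' implementation:

import numpy as np

def epopt_iteration(theta, sample_params, rollout, batch_pol_opt,
                    n_traj=240, epsilon=0.1):
    """One iteration of EPOpt-epsilon (sketch of Algorithm 1).
    sample_params() draws p_k ~ P_psi; rollout(p, theta) returns
    (trajectory, discounted_return); batch_pol_opt(theta, trajs) is
    one step of a batch policy optimizer such as TRPO."""
    trajs, returns = [], []
    for _ in range(n_traj):                                  # lines 3-6
        tau, ret = rollout(sample_params(), theta)
        trajs.append(tau)
        returns.append(ret)
    q_eps = np.percentile(returns, 100.0 * epsilon)          # line 7
    subset = [t for t, r in zip(trajs, returns) if r <= q_eps]  # line 8
    return batch_pol_opt(theta, subset)                      # line 9

As described above, a typical schedule runs a few iterations with epsilon=1.0 (no sub-sampling) before switching to the desired value, e.g. 0.1.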
In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

$P(p_i \mid \tau_k) \propto \mathcal{L}(\tau_k \mid p_i) \times P_P(p_i)$

where $P_P(p_i)$ is the probability of drawing $p_i$ from the prior distribution, and $\mathcal{L}(\tau_k \mid p_i)$ is the likelihood of generating the observed trajectory with model parameters $p_i$. The likelihood can be factored using the Markov property as $\mathcal{L}(\tau_k \mid p) = \prod_t P\left(S_{t+1} = s_{t+1}^{(k)} \mid s_t^{(k)}, a_t^{(k)}, p\right)$, with the transition probabilities given by the model, $P(S_{t+1} \mid s_t, a_t, p) = T_p(s_t, a_t)$. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples, as in particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation to learn policies in cases where the target model could be very different from the initially assumed distribution.
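The following NumPy sketch implements a sample-based version of this update: particles drawn from a proposal distribution are importance-weighted by likelihood times prior over proposal (in log space), and a Gaussian is refit to the weighted set. The callables are assumed interfaces, and the function is illustrative rather than the paper's code:

import numpy as np

def adapt_source_distribution(particles, log_likelihood, log_prior,
                              log_proposal):
    """Sampling-based posterior update of the source distribution.
    particles: list of parameter vectors p_i drawn from a proposal
        (e.g. uniform). Each particle's importance weight is
        L(tau|p_i) * P_prior(p_i) / P_proposal(p_i).
    Returns (mean, cov) of a Gaussian fit to the weighted particles."""
    logw = np.array([log_likelihood(p) + log_prior(p) - log_proposal(p)
                     for p in particles])
    w = np.exp(logw - logw.max())   # stabilize before normalizing
    w /= w.sum()
    P = np.array(particles)
    mean = w @ P
    centered = P - mean
    cov = centered.T @ (centered * w[:, None])
    return mean, cov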
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate EPOpt-ε on the hopper and half-cheetah locomotion tasks in MuJoCo; under-actuation, high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments. The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units, with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions.

¹Supplementary video: https://youtu.be/wlYJ9vwaoto

Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-ε variant of our method?
4. Can EPOpt learn policies that are robust to unmodeled effects, that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return.
"}, {"section_index": "6", "section_name": "4.1 COMPARISON TO STANDARD POLICY SEARCH", "section_text": "In Figure 1 we evaluate the performance of standard TRPO and EPOpt(ε = 0.1) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso mass. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution.

Figure 1: Performance of hopper policies when testing on target domains with different torso masses. The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by m = in the legend). The rightmost plot shows the performance of EPOpt(ε = 0.1) trained on a Gaussian source distribution with mean mass μ = 6 and standard deviation σ = 1.5. The shaded regions show the 10th and 90th percentile of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance on the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.

The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, the parameters of which (torso mass and friction) are given on the x and y axes. The adversarially trained policy, EPOpt(ε = 0.1), is observed to generalize to a wider range of models and is more robust."}, {"section_index": "7", "section_name": "4.2 ANALYSIS OF ROBUSTNESS", "section_text": "Next, we analyze the robustness of policies trained using EPOpt on the hopper domain. For this analysis, we construct a source distribution which varies four different physical parameters: torso mass, ground friction, foot joint damping, and joint inertia (armature). This distribution is presented in Table 1.

Table 1: Initial source domain distribution.

Hopper          | μ     | σ    | low  | high
mass            | 6.0   | 1.5  | 3.0  | 9.0
ground friction | 2.0   | 0.25 | 1.5  | 2.5
joint damping   | 2.5   | 1.0  | 1.0  | 4.0
armature        | 1.0   | 0.25 | 0.5  | 1.5

Half-Cheetah    | μ     | σ    | low  | high
mass            | 6.0   | 1.5  | 3.0  | 9.0
ground friction | 0.5   | 0.1  | 0.3  | 0.7
joint damping   | 1.5   | 0.5  | 0.5  | 2.5
armature        | 0.125 | 0.04 | 0.05 | 0.2
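Drawing one model instance from this source distribution can be sketched as follows; clipping the Gaussian draw to the [low, high] range is an assumption about how the bounds in Table 1 are enforced (truncated re-sampling would be an alternative):

import numpy as np

# (mu, sigma, low, high) per parameter, following the hopper rows of Table 1.
HOPPER_SOURCE = {
    "mass":            (6.0, 1.5, 3.0, 9.0),
    "ground_friction": (2.0, 0.25, 1.5, 2.5),
    "joint_damping":   (2.5, 1.0, 1.0, 4.0),
    "armature":        (1.0, 0.25, 0.5, 1.5),
}

def sample_model_params(spec, rng=np.random):
    """Draw one model instance p ~ P for the ensemble."""
    return {name: float(np.clip(rng.normal(mu, sigma), low, high))
            for name, (mu, sigma, low, high) in spec.items()}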
Using this source distribution, we compare between three different methods: (1) standard policy search (TRPO) trained on a single model corresponding to the mean parameters in Table 1; (2) EPOpt(ε = 1) trained on the source distribution; (3) EPOpt(ε = 0.1), i.e. the adversarially trained policy, again trained on the previously described source distribution. The aim of the comparison is to study direct-transfer performance, similar to the robustness evaluations common in robust controller design (Zhou et al., 1996). Hence, we learn a policy using each of the methods, and then test the policies on different model instances (i.e. different combinations of physical parameters) without any adaptation. The results of this comparison are summarized in Figure 2, where we present the performance of the policy for testing conditions corresponding to different torso mass and friction values, which we found to have the most pronounced impact on performance. The results indicate that EPOpt(ε = 0.1) produces highly robust policies. A similar analysis for the 10th percentile of the return distribution (a softer version of worst-case performance), the half-cheetah task, and different ε settings are presented in the appendix.

Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass (6), and an ensemble where all models have the same mass (6) and other parameters varying as described in Table 1."}, {"section_index": "8", "section_name": "4.3 ROBUSTNESS TO UNMODELED EFFECTS", "section_text": "To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not consider a distribution over torso mass. Specifically, all models in the source domain distribution have the same torso mass (value of 6), but we evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt(ε = 0.1) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as the case when mass is also modeled as part of the source domain distribution."}, {"section_index": "9", "section_name": "4.4 MODEL ADAPTATION", "section_text": "The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough such that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that progressively, the source distribution becomes a better approximation of the target domain and consequently the performance improves. In this case, since we followed a sampling based approach, we used a uniform sampling distribution, and weighted each sample with the importance weight as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with return more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves near monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than 2 × 10^4 trajectories when the neural network parameters are initialized randomly.
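Putting the pieces together, one round of this experiment alternates robust policy search on the current source distribution with a single target-domain rollout and a posterior update. The helpers named below are hypothetical stand-ins for the components sketched earlier, passed in as arguments so the loop itself is self-contained:

def adaptation_rounds(theta, psi, train_epopt, collect_target_trajectory,
                      update_distribution, n_rounds=10):
    """Round-based protocol from Section 3, as used in Figure 4 (sketch).
    psi parametrizes the source distribution; each round consumes one
    episode of target-domain data."""
    for _ in range(n_rounds):
        theta = train_epopt(theta, psi)          # robust policy on P_psi
        tau = collect_target_trajectory(theta)   # one episode on W
        psi = update_distribution(psi, tau)      # importance-weighted refit
    return theta, psi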
Figure 4: (a) Visualizes the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain. The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain. The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. (b) presents the corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance."}, {"section_index": "10", "section_name": "5 RELATED WORK", "section_text": "Robust control is a branch of control theory which formally studies the development of robust policies (Zhou et al., 1996; Nilim & El Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst case analysis is performed. Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model the complexities of real-world tasks. The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006), Gaussian dynamics (Ross et al., 2008), or task specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get the optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.

Risk sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application in high dimensional continuous control tasks has not been sufficiently explored. We refer readers to Garcia & Fernandez (2015) for a survey of related risk sensitive RL methods in the context of robustness and safety.

Standard model-based control methods typically operate by finding a maximum-likelihood estimate of the target model (Ljung, 1998; Ross & Bagnell, 2012), followed by policy optimization.
The use of model ensembles to produce robust controllers was explored recently in robotics. Mordatch et al. (2015a) use a trajectory optimization approach and an ensemble with a small finite set of models; whereas we follow a sampling based direct policy search approach over a continuous distribution of uncertain parameters, and also show domain adaptation. Sampling based approaches can be applied to complex models and discrete MDPs which cannot be planned through easily. Similarly, Wang et al. (2010) use an ensemble of models, but their goal is to optimize for average case performance as opposed to transferring to a target MDP. Wang et al. (2010) use a hand engineered policy class whose parameters are optimized with CMA-ES. EPOpt, on the other hand, can optimize expressive neural network policies directly. In addition, we show model adaptation, effectiveness of the sub-sampling step (the ε < 1 case), and robustness to unmodeled effects, all of which are important for transferring to a target MDP.

Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, this is primarily geared towards situations where task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g. friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstration (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006). These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement.

In this paper, we presented the EPOpt-ε algorithm for training robust policies on ensembles of source domains. Our method provides for training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides for highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects.

Although our method exhibits good generalization performance, the adaptation algorithm we use currently relies on sampling the parameter space, which is computationally intensive as the number of variable physical parameters increases. We observed that (adaptive) sampling from the prior leads to fast and reliable adaptation if the true model does not have very low probability in the prior. However, when this assumption breaks, we require a different sampling distribution which could produce samples from all regions of the parameter space. This is a general drawback of Bayesian adaptation methods. In future work, we plan to explore alternative sampling and parameterization schemes, including non-parametric distributions.
An eventual end-goal would be to replace the physics simulator entirely with learned Bayesian neural network models, which could be adapted with limited data from the physical system. These models could be pre-trained using physics based simulators like MuJoCo to get a practical initialization of neural network parameters. Such representations are likely useful when dealing with high dimensional inputs like simulated vision from rendered images, or tasks with complex dynamics like deformable bodies, which are needed to train highly generalizable policies that can successfully transfer to physical robots acting in the real world."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's research group for insightful comments about the work. The authors would also like to thank Emo Todorov for the MuJoCo simulator. Aravind Rajeswaran and Balaraman Ravindran acknowledge financial support from ILDS, IIT Madras."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.

Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.

Erick Delage and Shie Mannor. Percentile optimization for markov decision processes with parameter uncertainty. Operations Research, 58(1):203-213, 2010.

Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In ICML, 2016.

Michael O. Duff. Design for an optimal probe. In ICML, 2003.

Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. In Proceedings of Robotics: Science and Systems, 2011.

Javier Garcia and Fernando Fernandez. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 2015.

Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.

Sham Kakade. A natural policy gradient. In NIPS, 2001.

Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.

Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In ICML, 2006.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.

Lennart Ljung. System Identification, pp. 163-173. Birkhäuser Boston, Boston, MA, 1998.

Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, and Emanuel V. Todorov. Interactive control of diverse complex characters with neural networks. In NIPS, 2015b.

Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. Spaan, and Pascal Poupart. Point-based value iteration for continuous pomdps. Journal of Machine Learning Research, 7:2329-2367, 2006.

Pascal Poupart, Nikos A. Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete bayesian reinforcement learning. In ICML, 2006.

John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, and Pieter Abbeel. Trust region policy optimization. In ICML, 2015.

David Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016.
Aviv Tamar, Dotan Di Castro, and Ron Meir. Integrating a partial model into model free reinforcement learning. Journal of Machine Learning Research, 2012.
Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, and Pascal Poupart. Bayesian Reinforcement Learning, pp. 359-386. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, Feb 2015.
Arnab Nilim and Laurent El Ghaoui. Robust control of markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798, 2005.
Xue Bin Peng, Glen Berseth, and Michiel van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2016), 2016.
Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633-1685, December 2009.
Pawel Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22:1484-1497, 2009.
Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996. ISBN 0-13-456567-3."}, {"section_index": "13", "section_name": "A APPENDIX", "section_text": "Hopper: The hopper task is to make a 2D planar hopper with three joints and 4 body parts hop forward as fast as possible (Erez et al., 2011). This problem has a 12 dimensional state space and a 3 dimensional action space that corresponds to torques at the joints. We construct the source domain by considering a distribution over 4 parameters: torso mass, ground friction, armature (inertia), and damping of foot.

Half Cheetah: The half-cheetah task (Wawrzynski, 2009) requires us to make a 2D cheetah with two legs run forward as fast as possible. The simulated robot has 8 body links with an 18 dimensional state space and a 6 dimensional action space that corresponds to joint torques. Again, we construct the source domain using a distribution over the following parameters: torso and head mass, ground friction, damping, and armature (inertia) of foot joints.

Figure 5: Illustrations of the 2D simulated robot models used in the experiments. The hopper (a) and half-cheetah (b) tasks present the challenges of under-actuation and contact discontinuities. These challenges, when coupled with parameter uncertainties, lead to dramatic degradation in the quality of policies when robustness is not explicitly considered.

A video demonstration of the trained policies on these tasks can be viewed here: Supplementary video (https://youtu.be/wlYJ9vwaoto)

Reward functions: For both tasks, we used the standard reward functions implemented with OpenAI gym (Brockman et al., 2016), with minor modifications. The reward structure for the hopper task is:

r(s, a) = v_x - 0.001 \|a\|^2 + b,

where s are the states comprising of joint positions and velocities; a are the actions (controls); v_x is the forward velocity; and b is a bonus for being alive (b = 1). The episode terminates when z_torso < 0.7 or when |\theta_y| < 0.2, where \theta_y is the forward pitch of the body.

For the cheetah task, we use the reward function:

r(s, a) = v_x - 0.1 \|a\|^2 + b,

where the alive bonus is 1 if the head of the cheetah is above -0.25 (relative to the torso), and similarly the episode terminates if the alive condition is violated.

Our implementation of the algorithms and environments is public in this repository to facilitate reproduction of results: https://github.com/aravindr93/robustRL
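As a rough sketch (not taken from the released code), the two reward expressions above can be written directly in Python; the forward velocity, action vector, and alive flag are assumed to be extracted from the simulator state:

import numpy as np

ALIVE_BONUS = 1.0  # the bonus b in the reward definitions above

def hopper_reward(forward_velocity, action, alive):
    # r(s, a) = v_x - 0.001 * ||a||^2 + b
    return forward_velocity - 0.001 * np.sum(np.square(action)) + (ALIVE_BONUS if alive else 0.0)

def cheetah_reward(forward_velocity, action, alive):
    # r(s, a) = v_x - 0.1 * ||a||^2 + b
    return forward_velocity - 0.1 * np.sum(np.square(action)) + (ALIVE_BONUS if alive else 0.0)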
1. Neural network architecture: We used a neural network with two hidden layers, each with 64 units and tanh non-linearity. The policy updates are implemented using TRPO.
2. Trust region size in TRPO: The maximum KL divergence between successive policy updates is constrained to be 0.01.
3. Number and length of trajectory rollouts: In each iteration, we sample N = 240 models from the ensemble; one rollout is performed on each such model. This was implemented in parallel on multiple (6) CPUs. Each trajectory is of length 1000, the same as the standard implementations of these tasks in gym and rllab.

The results in Fig 1 and Fig 2 were generated after 150 and 200 iterations of TRPO respectively, with each iteration consisting of 240 trajectories as specified in (3) above.

We compare the performance of the three considered policies: viz. TRPO on mean parameters, EPOpt(ε = 1), and EPOpt(ε = 0.1). We similarly analyze the 10th percentile of the return distribution as a proxy for worst-case analysis, which is important for a robust control policy (here, the distribution of returns for a given model instance is due to variations in initial conditions). The corresponding results are presented below:

Figure 6: 10th percentile of return distribution for the hopper task (axes: torso mass vs. ground friction), for the maximum likelihood, EPOpt(ε = 1), and EPOpt(ε = 0.1) policies. EPOpt(ε = 0.1) clearly outperforms the other approaches. The 10th percentile of the return distribution for EPOpt(ε = 0.1) also nearly overlaps with the expected return, indicating that the policies trained using EPOpt(ε = 0.1) are highly robust and reliable.

A.4 ROBUSTNESS ANALYSIS FOR HALF-CHEETAH TASK

Figure 7: Performance of policies for various model instances (torso mass vs. ground friction) for the half-cheetah domain, similar to Figure 2. Again, it is observed that the adversarially trained policy is robust and generalizes well to all models in the source distribution."}, {"section_index": "14", "section_name": "A.5 DIFFERENT SETTINGS FOR ε", "section_text": "Here, we analyze how different settings for ε influence the robustness of learned policies. The policies in this section have been trained for 200 iterations with 240 trajectory samples per iteration. Similar to the description in Section 3.1, the first 100 iterations use ε = 1, and the final 100 iterations use the desired ε. The source distribution is described in Table 1. We test the performance on a grid over the model parameters.
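A minimal sketch of this grid-evaluation protocol, assuming hypothetical make_env and rollout helpers that instantiate a model with the given physical parameters and return one episode's return (the grid ranges below are illustrative, not the paper's exact values):

import itertools
import numpy as np

# Hypothetical grid over two of the varied physical parameters.
torso_masses = np.linspace(3.0, 9.0, 13)
frictions = np.linspace(0.3, 0.7, 11)

def evaluate_on_grid(policy, make_env, rollout, n_episodes=10):
    # Average return of the policy for every parameter setting on the grid.
    returns = np.zeros((len(torso_masses), len(frictions)))
    for (i, mass), (j, friction) in itertools.product(
            enumerate(torso_masses), enumerate(frictions)):
        env = make_env(torso_mass=mass, friction=friction)
        returns[i, j] = np.mean([rollout(policy, env) for _ in range(n_episodes)])
    return returns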
Our results, summarized in Table 2, indicate that decreasing ε decreases the variance in performance, along with a small decrease in average performance, and hence enhances robustness.

Table 2: Performance statistics for different ε settings for the hopper task (mean and std of the return; P5 through P90 are percentiles of the return distribution).

ε         mean   std     P5     P10    P25    P50    P75    P90
0.05      2889    502   1662   2633   2841   2939   2966   3083
0.1       3063    579   1618   2848   3223   3286   3336   3396
0.2       3097    665   1527   1833   3259   3362   3423   3483
0.3       3121    706   1461   1635   3251   3395   3477   3513
0.4       3126    869   1013   1241   3114   3412   3504   3546
0.5       3122   1009    984   1196   1969   3430   3481   3567
0.75      3133    952   1005   1516   2187   3363   3486   3548
1.0       3224   1060   1198   1354   1928   3461   3557   3604
Max-Lik   1710   1140    352    414    646   1323   3088   3272"}, {"section_index": "15", "section_name": "A.6 IMPORTANCE OF BASELINE FOR BATCHPOLOPT", "section_text": "As described in Section 3.1, it is important to use a good baseline estimate for the value function for the batch policy optimization step. When optimizing for the expected return, we can interpret the baseline as a variance reduction technique. Intuitively, policy gradient methods adjust parameters of the policy to improve the probability of trajectories in proportion to their performance. By using a baseline for the value function, we make updates that increase the probability of trajectories that perform better than average and vice versa. In practice, this variance reduction is essential for getting policy gradients to work. For the CVaR case, Tamar et al. (2015) showed that without using a baseline, the policy gradient is biased. To study the importance of the baseline, we first consider the case where we do not employ the adversarial sub-sampling step, and fix ε = 1. We use a linear baseline with a time-varying feature vector as described in Section 3.1. Figure 8(a) depicts the learning curve for the source distribution in Table 1. The results indicate that use of a baseline is important to make policy gradients work well in practice.

Next, we turn to the case of ε < 1. As mentioned in Section 3.1, setting a low ε from the start leads to unstable learning. The adversarial nature encourages penalizing poor trajectories more, which constrains the initial exploration needed to find promising trajectories. Thus we will "pre-train" by using ε = 1 for some iterations, before switching to the desired ε setting. From Figure 8(a), it is clear that pre-training without a baseline is unlikely to help, since the performance is poor. Thus, we use the following setup for comparison: for 100 iterations, EPOpt(ε = 1) is used with the baseline. Subsequently, we switch to EPOpt(ε = 0.1) and run for another 100 iterations, totaling 200 iterations. The results of this experiment are depicted in Figure 8(b). This result indicates that use of a baseline is crucial for the CVaR case, without which the performance degrades very quickly. We repeated the experiment with 100 iterations of pre-training with ε = 1 and without baseline, and observed the same effect. These empirical results reinforce the theoretical findings of Tamar et al. (2015).

As emphasized previously, EPOpt is a generic policy gradient based meta algorithm for finding robust policies. The BatchPolOpt step (line 9, Algorithm 1) calls one gradient step of a policy gradient method, the choice of which is largely orthogonal to the main contributions of this paper.
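The sub-sampling step that precedes BatchPolOpt, by contrast, is the part specific to EPOpt: keep only the worst ε-fraction of sampled trajectories (by return) for the gradient step. A minimal sketch, not the paper's exact implementation:

import numpy as np

def epopt_subsample(trajectories, returns, epsilon):
    # EPOpt(epsilon) sub-sampling: keep only trajectories whose return lies
    # in the worst epsilon-percentile, approximating the CVaR objective.
    # epsilon = 1 keeps everything and recovers the average-return case.
    returns = np.asarray(returns, dtype=float)
    cutoff = np.percentile(returns, 100.0 * epsilon)
    return [traj for traj, ret in zip(trajectories, returns) if ret <= cutoff]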
Figure 8: (a) depicts the learning curve for EPOpt(ε = 1) with and without baselines. The learning curves indicate that use of a baseline provides a better ascent direction, thereby enabling faster learning. (b) depicts the learning curve when using the average return and CVaR objectives. For the comparison, we "pre-train" for 100 iterations with the ε = 1 setting and using a baseline. The results indicate that a baseline is very important for the CVaR objective (ε < 1), without which the performance drops very quickly. Here, performance is the average return in the source distribution.

For the reported results, we have used TRPO as the policy gradient method. Here, we compare the results to the case when using the classic REINFORCE algorithm. For this comparison, we use the same value function baseline parametrization for both TRPO and REINFORCE. Figure 9 depicts the learning curves when using the two policy gradient methods. We observe that performance with TRPO is significantly better. When optimizing over probability distributions, the natural gradient can navigate the warped parameter space better than the "vanilla" gradient. This observation is consistent with the findings of Kakade (2001), Schulman et al. (2015), and Duan et al. (2016).

Figure 9: Learning curves for EPOpt(ε = 1) when using the TRPO and REINFORCE methods for the BatchPolOpt step."}]
B1gtu5ilg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The power of the human mind in inference and generalization rests on our brain\u2019s ability to develo\nmodels of abstract knowledge of the natural world (Tenenbaum et al.|[2011). When shown nove\nobjects, both children and adults can rapidly generalize from just a few examples to classify and grou\nthem based on their perceptual similarity. Understanding the processes that give rise to perceptuz\nsimilarity will provide insight into the development of abstract models in our brain. In this paper, w\n\nexplored computational models for understanding the neural basis of human perceptual similarit\njudgment.\nRecent deep convolutional neural networks (DCNNs) have produced feature representations in the\nhidden layers that can match well with neural representations observed in the primate and human\nvisual cortex. It was found that there is a strong correspondence between neural activities (neuronal\n\nspikes or {MRI signals) and the activities of the deep layers of deep networks (Agrawal et al.||2014|\nKhaligh-Razavi & Kriegeskorte}/2014}/Yamins et al.|{2014), suggesting that deep neural networks\n\nhave in fact learned meaningful representations that are close to humans\u2019, even though the neural"}, {"section_index": "1", "section_name": "TRANSFER OF VIEW-MANIFOLD LEARNING TO SIMI-\nLARITY PERCEPTION OF NOVEL OBJECTS", "section_text": "Zhihao Li, Yimeng Zhang\nDepartment of Computer Science\nCarnegie Mellon University\n(zhihaol, yimengzh}@andrew.cmu.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The DCNNs that neuroscientists and cognitive scientists have studied so far, such as AlexNe\n(Krizhevsky et al.||2012), were trained with static images with the goal of classifying objects in stati\nimages into different categories. Perceptual similarity judgment is obviously closely related to th\nmechanisms used in object classification\u2014we classify objects with similar attributes and appearance\ninto the same class, and thus object classification rests in part on our perceptual similarity judgmen\nand relies on physical, semantic abstract attributes common to objects in each class. Our perceptua\nsimilarity judgment might also be tied to our need for individual object recognition\u2014after all, w\nmight want to recognize an individual person or object, not just a class. It is obviously important t\nbe able to recognize one\u2019s own child or the cup one is using. The need to recognize an individua\nobject, independent of view points, requires fine discrimination of details, and might also be a ver\npotent force for shaping our perceptual similarity judgment\u2019s machinery.\nWe retrain a DCNN with object persistence constraints, using rendered 3D objects. We call thi:\netrained network Object Persistence Net (OPnet). During training, we utilize a Siamese network\narchitecture for incorporating object persistence constraints into the network. We demonstratec\n\u2018hat multi-view association training with a relatively small set of objects directly affects similarity\njudgment across many classes of objects, including novel objects that the network has not seer\noefore. Our contribution is to demonstrate the surprising transfer of learning of similarity judgmen\n\u2018o untrained classes of objects and a variety of completely artificial novel objects. 
We analyzed the view-manifold fine-tuned with object persistence constraints to understand what changes have taken place in the feature representation of the OPnet that has resulted in the development of this remarkable transfer of perceptual similarity judgment to novel objects.

Cognitive neuroscientists have started to explore how the representations learned by deep networks can be used to model various aspects of human perception, such as memorability of objects in images, typicality (Lake et al., 2015), and similarity judgment. Certain correspondences between deep net representations and human experimental results have been found. In particular, Peterson et al. (2016) found that human similarity judgment on a set of natural images might be similar to the feature representations in deep networks after some transformation.

The development of invariant object recognition has often been attributed to object continuity or persistence in our visual experience. When we see an object, we tend to see it from different angles over time, as we walk by or around it, or directly manipulate it. This temporal persistence of objects allows our visual system to associate one view of an object with another view of the same object experienced in temporal proximity, as was proposed in slow-feature analysis (Wiskott & Sejnowski, 2002) or memory trace models in computational neuroscience for learning translation and rotation invariance in object recognition. Object persistence as a term in psychology sometimes refers to people's knowledge or belief on the continual existence of an object even when it is occluded and invisible from view. Here, we use it more generally to denote the temporal persistence of an object in our visual experience. We propose to incorporate the object continuity or persistence constraint in the training of DCNN, and investigate what new abstraction and capability such a network would develop as a consequence. We also evaluate the behaviors of the resulting network to see if they match the data on human perceptual similarity judgment of novel objects in an earlier study (Tenenbaum et al., 2011).

Creating large sets of human labeled data on object similarity judgment is expensive. There has been a recent trend in exploring inherent information as supervisory signals, including using cycle consistency for learning dense correspondence (Zhou et al., 2015), camera motion for foreground segmentation (Zeng et al.), and context information (Doersch et al., 2015). Among these, most related to our study is the work of Wang & Gupta (2015) utilizing visual tracking results as supervisory signals, which is an object persistence or continuity assumption, to learn deep networks without explicit object labels. While the tracked patches can be partly regarded as multi-view images, the changes in views tend to be very limited. In comparison, we used graphics rendered multi-view images as the object persistency constraint. Such a clean setup is necessary for us to study the effect of the object persistency constraint on novel objects, as well as the transferability of view-manifold learning to similarity perception.

Recent approaches in representation learning of 3D shapes are also related to our work. Generative models such as (Wu et al.) and (Tatarchenko et al.) learn a vector representation for generation of 3D shapes. Other approaches learn an embedding space for multi-view object retrieval (Guo et al., 2016) or for cross-view image and shape retrieval (Li et al., 2015). While these works explored training with multi-view images, they did not constrain the view points in a continuous way and, most importantly, the transferability to judgement of novel objects of novel classes was not studied. We evaluate the performance of the approach of Li et al. (2015) in our tasks for comparison. That approach learned an embedding space of 3D shapes and used a CNN for image embedding for the purpose of image purification. 
Figure 1: Framework for training and testing the network utilizing object persistence. For training (upper panel), we first render multiple views for each object and arrange them into triplets containing a similar pair and a dissimilar pair as input to a Siamese network architecture. For testing (lower panel), when given a query image, the network computes a similarity score for each of the candidate images. The lower panel shows some example similarity scores given by our OPnet (A = 0.83, B = 0.41, C = 0.31, D = 0.23, E = 0.14, F = 0.07), where different views of the same object are considered the most similar, followed by different objects in the same category, and finally those objects belonging to different categories are of least similarity with the query image.

We take a standard CNN (AlexNet), that has already learned good feature representations for object classification, and retrain the network in a Siamese triplet architecture with object persistence constraints, using multi-view images rendered from a set of 3D object models in ShapeNet."}, {"section_index": "3", "section_name": "2.1 OBJECT PERSISTENT NET (OPNET)", "section_text": "To study the impact of the object persistence constraint in the development of perceptual similarity judgment, OPnet utilizes a Siamese triplet architecture. This triplet architecture can be visualized as three baseline CNN towers that share the same parameters (Figure 1). In implementation, it is just one single CNN applied to three images, two of which are considered more "similar" than the third "different" one. Conceptually, our OPnet tries to bring the feature representations of the two "similar" images together, and drive apart the representation corresponding to the third "different" image. The architecture and the initial weights of the baseline CNN are the same as those of AlexNet trained on ImageNet (Deng et al., 2009). To train our OPnet with triplet input (X_i, X_i^+, X_i^-), we present two views of the same 3D object to two base networks as (X_i, X_i^+), and a view of a different object to the third base network as X_i^-. Object persistence means that given (X_i, X_i^+, X_i^-), we try to push the representations for views of the same object (X_i, X_i^+) to be close and push them away from the representation for the different object X_i^-. We minimize the loss function with a hinge loss term:

\min_W \; \frac{\lambda}{2}\|W\|_2^2 + \sum_{i=1}^{N} \max\{0,\; D(X_i, X_i^+) - D(X_i, X_i^-) + M\}

D(X_1, X_2) = 1 - \frac{f(X_1) \cdot f(X_2)}{\|f(X_1)\|\,\|f(X_2)\|}

where \lambda is the weight decay and W denotes the weights of the network. f(\cdot) is the CNN representation output as a function of an input image, and M denotes the margin parameter. The margin is a threshold to decide whether the two views are considered similar or not. The higher the margin, the more we are forcing the network to develop a uniform representation for multiple views of the same object, relative to views of another object. D is the cosine distance function for a pair of features.

The different objects in principle could be from the same category or from different categories. During training, we constrain the "different object" to be another 3D object from the same category, to push apart more forcefully the feature representations of objects from the same category, resulting in view-invariant object discrimination within the same category. We expect the result of this training to create a view-manifold for each individual object: views within the same manifold are considered to be "similar" and closer together because they belong to the same object."}, {"section_index": "4", "section_name": "2.2 DISTANCE METRIC LEARNING", "section_text": "Our Siamese triplet approach transforms the view-manifold of the original baseline network, so that different views of the same object are considered similar and become closer in the feature representation space. Thus, it can be viewed as a form of distance metric learning (DML), which is a set of methods that learn a transformation from the input space to a feature space. The Siamese network has been a popular distance metric learning method, used in signature verification (Bromley et al., 1993), learning invariant mappings (Hadsell et al., 2006), face verification (Chopra et al., 2005), unsupervised learning (Wang & Gupta, 2015) or image similarity ranking (Wang et al., 2014). In these works, the definition of similarity for DML comes from semantic labeling like the class label. In our work, the similarity is defined by the object persistence constraints, obtained during the rendering of 3D models and providing a continuous trajectory for each single object. Besides, the large variation of the 2D appearance induced by 3D rotation prevents our network from learning trivial global templates, but induces it to learn features that are more generalized and thus transferable more easily to novel objects.

DCNNs, such as AlexNet, pre-trained on large datasets, have developed useful feature representations that can be fine-tuned for other specific tasks (Donahue et al., 2014; Qian et al., 2015; Karpathy et al., 2014). However, the pre-training of a DCNN involves class labels as teaching signals. During pretraining, the network learns to throw away much information to extract invariants for classification. On the other hand, DML approaches are able to develop feature representations that preserve more fine-grained features, as well as intra- and inter-class variations."}, {"section_index": "5", "section_name": "2.3 RENDERING MULTI-VIEW DATASETS FOR SIMILARITY JUDGEMENT TRAINING", "section_text": "To allow the network to learn features under the object persistence constraints and develop a similarity judgment that can transfer, we create one set of data for training and five sets of novel objects for testing of the transferability. To focus our study on the network's ability to perceive 3D spatial relations and features of individual objects, we grayscale our images during rendering to eliminate the impact of color. 
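A minimal PyTorch sketch of this objective (the paper's setting uses M = 0.1; here the hinge terms are averaged over the batch and the weight-decay term is left to the optimizer, so treat this as an illustration rather than the exact training code):

import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    # D(X1, X2) = 1 - f(X1).f(X2) / (||f(X1)|| ||f(X2)||)
    return 1.0 - F.cosine_similarity(a, b, dim=1)

def triplet_hinge_loss(f_anchor, f_positive, f_negative, margin=0.1):
    # max{0, D(X, X+) - D(X, X-) + M} for each triplet in the batch
    d_pos = cosine_distance(f_anchor, f_positive)
    d_neg = cosine_distance(f_anchor, f_negative)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()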
For the same reason, we do not add any backgrounds.

We render multi-view images of individual objects from 7K 3D CAD models of objects in ShapeNet (Chang et al., 2015). The 7K models belong to 55 categories, such as cars and chairs. For each model, we render 12 different views by rotating the cameras along the equator from a 30° elevation angle and taking photos of the object at 12 equally separated azimuthal angles (see Fig 1). We use the rendering pipeline in Blender, an open source 3D graphics software, with a spotlight that is static relative to the camera.

For training, we sample 200 object models from 29 categories of ShapeNet. 20 of these object models from each category are saved for cross validation. For testing, we make the assumptions that (1) views of the same object are perceived to be more similar when compared to views of a different object, and (2) views of objects in the same category are perceived to be more similar than views of objects from different categories. These assumptions are consistent with findings in earlier studies on similarity judgment in humans (Quiroga et al., 2005; Erdogan et al., 2014; Goldstone, 2013). Since we render images based on CAD models, we can control the variations to create a large dataset that can approximate ground-truth data for similarity judgment for our experiments, without resorting to large-scale human judgment evaluation. All the objects in the following five test sets are novel objects in the sense that they are not used in training.

Novel instance: Created by rendering an additional 20 novel objects from each of the 29 categories used in training the OPnet. This is used to test the transfer of view-manifold learning to novel objects of the same category. The task is not trivial due to the large intra-category variation existing in the ShapeNet.

Novel category: Created by rendering objects from 26 untrained categories. This is a more challenging test of the transfer of view-manifold learning to novel categories.

Synthesized objects: Created by rendering a set of 3D models we synthesized. These are textureless objects with completely novel shapes. The dataset consists of 5 categories, with 10 instances for each category. Within each category, the objects either have similar local parts, or have the same global configuration, based on human judgment. This is an even more challenging test, as these synthesized objects are not in the ImageNet or ShapeNet.

Pokemon: Created from 3D models of the Pokemon dataset. Pokemons are cartoon characters with certain evolution relationships with each other, which provides an alternative measurement of similarity. This test evaluates the transfer of learning to novel objects with different styles and more complicated textures. We collected 438 CAD models of Pokemon from an online database. We divide these models into 251 categories according to their evolution relationships, with most of these categories containing only 2 to 4 objects. Pokemons of the same category look more similar on average due to their "genetic linkage".

Tenenbaum objects: This test set contains novel objects from Tenenbaum et al. (2011), where the ground truth is based on human similarity judgment.

The similarity score between a query image and a candidate image is computed as 1 minus the cosine distance of the feature representations of the query and candidate pair, and a higher score means higher similarity.
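In other words, the ranking score reduces to the cosine similarity of the two feature vectors; a small sketch makes the identity explicit:

import numpy as np

def similarity_score(f_query, f_candidate):
    # cosine distance = 1 - cosine similarity, so
    # score = 1 - cosine distance = cosine similarity
    cos = np.dot(f_query, f_candidate) / (
        np.linalg.norm(f_query) * np.linalg.norm(f_candidate))
    return cos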
Given a test set containing objects of multiple categories, we evaluate the OPnet via two retrieval tasks: object instance retrieval and categorical retrieval. In the object instance retrieval task, for each image P containing object O of category C in the test set, the network is asked to rank all other images in C, such that images for O should have higher similarity scores than images for other objects in C. In the categorical retrieval task, for each image P of category C, the network is asked to rank all other images, such that images in category C should have higher scores than images not in C. Here we are indirectly utilizing the human perception information, as categories are defined by human perception based on their similarity in shapes or functions."}, {"section_index": "6", "section_name": "2.5 IMPLEMENTATION DETAILS", "section_text": "We use Caffe (Jia et al., 2014) for training the networks. The base network of the OPnet is modified from the AlexNet architecture, where we drop the last fully connected layer (fc8) and replace the softmax loss with our triplet hinge loss. The network is initialized by weights pre-trained on ImageNet. The objective is optimized using mini-batch stochastic gradient descent (SGD) and we fine-tune the network for all layers. For each pair of positive examples (X, X^+), we select two hard negative examples X^- which give the highest loss (similar to Wang & Gupta (2015)) and another two randomly from within the mini-batch. Starting with a learning rate of 0.01, we decrease it by a factor of 10 every 8K iterations, with a momentum of 0.9. We stop the training at 20K iterations. Weight decay is set to 0.0005. We set the margin parameter M to 0.1 by cross validation.

We compare the HoG feature representation and four deep learning networks: 1) OPnet, 2) AlexNet pre-trained on ImageNet, 3) an AlexNet fine-tuned for classification on ShapeNet data, denoted as "AlexNetFT", 4) the joint embedding model by Li et al. (2015). In AlexNetFT, we replace the original fc8 layer with a fully connected layer with 29 output units and fine-tune the last two fully connected layers (fc7, fc8) with cross-entropy loss. The AlexNetFT model is trained with the same data we used for training the OPnet. The joint embedding model was pre-trained on 6700 shapes in the chair category of ShapeNet. For the first three deep models, we use the fc7 layer as the feature representation and cosine distance to compute the distance between feature representations.
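A minimal sketch of the instance-retrieval scoring with mean average precision (simplified here to rank against all other images, rather than only those within the query's category as in the task definition above):

import numpy as np

def average_precision(relevant_in_rank_order):
    # AP for one query, given 0/1 relevance flags in ranked order.
    rel = np.asarray(relevant_in_rank_order, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at_hits = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_hits * rel).sum() / rel.sum())

def instance_retrieval_map(features, object_ids):
    # Rank every other image by cosine similarity to the query; an image
    # is relevant if it is another view of the same object instance.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats.T
    aps = []
    for q in range(len(object_ids)):
        order = np.argsort(-sims[q])
        order = order[order != q]  # exclude the query itself
        aps.append(average_precision(
            [object_ids[i] == object_ids[q] for i in order]))
    return float(np.mean(aps))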
We also show results based on the AlexNet feature representation both in terms of Euclidean distance and cosine distance measures, denoted as AlexNet+EucDis and AlexNet+CosDis. A comparison of feature representations from different layers is shown in Appendix B. We show the results for the instance retrieval task in Figure 2 and Table 1. The precision measure reflects the accuracy of the model's similarity judgment, with the two assumptions given in Section 2.3.

Figure 2: The precision-recall curves for the object instance retrieval task on different datasets (ShapeNet novel instance, ShapeNet chair category, ShapeNet novel category, synthesized objects, and the Pokemon dataset), comparing AlexNet+CosDis, AlexNetFT, HoG, Joint Embedding, and OPnet.

Table 1: Mean Average Precision for the object instance retrieval task over all test sets.

Method            Novel instance   Novel category   Synthesized objects   Pokemon   Chair
HoG               0.316            0.391            0.324                 0.332     0.322
AlexNetFT         0.437            0.503            0.356                 0.287     0.478
AlexNet+CosDis    0.529            0.623            0.517                 0.607     0.686
AlexNet+EucDis    0.524            0.617            0.514                 0.591     0.677
OPnet             0.856            0.855            0.574                 0.697     0.938
Joint-embedding   0.429            0.513            0.443                 0.387     0.814

On similarity judgment of novel objects from both the trained and untrained categories, OPnet significantly outperforms AlexNet and AlexNetFT, with an increased Mean Average Precision of at least 23%. The improvement is due to OPnet's gains in ability in discriminating different objects inside one category regardless of their viewpoints, while recognizing different views of the objects to be similar. For novel shapes in artificial synthesized objects and Pokemons, OPnet still shows an increased MAP of at least 6% (or a 15% decreased error rate for the Pokemon test). This shows that the similarity judgment resulting from view-manifold learning is valid not only for the trained objects or the objects in the same dataset, but is generalizable to other classes of objects. This suggests the learned feature representations are more abstract and general, allowing the transfer of the learning to substantially different datasets and novel objects, to a degree that is not well known or well studied in computer vision.

We compare OPnet with the joint embedding approach on the chair category of ShapeNet, shown in Figure 2. Both networks are trained with the chair category and are tested on novel chairs. OPnet outperforms the joint embedding approach by a large margin, showing that a better instance level discrimination is achieved using object persistence training, compared to using known shapes as anchor points for image embedding.

Figure 3: The precision-recall curves for the category level retrieval task. The three panels show the networks' performance on the ShapeNet dataset with novel instance, novel category and synthesized objects respectively.
Furthermore, because the joint embedding approach would need to be trained for each specific category, it does not perform well on novel categories.

When we fine-tuned AlexNet for classification of the 29 trained categories, the resulting AlexNetFT's feature representation actually performs the worst, compared to OPnet and the original AlexNet, on the instance similarity judgment or retrieval tasks. When a network is trained to perform classification, it learns to ignore subtle differences among objects in the same category. The fewer categories a network is trained on, the more the instance level similarity judgment will be compromised. This loss of the generality of its feature representation compromises its transferability to novel objects in other classes.

We notice that the performance gain for the OPnet is most significant on the ShapeNet dataset and the gap becomes small for the synthesized and Pokemon datasets. This shows OPnet's certain overfitting to the bias in ShapeNet, as the synthesized object dataset contains textureless objects and the Pokemon dataset contains mainly human-like characters that are not in ShapeNet.

Categorical retrieval provides another measure of the network's performance in similarity judgment. In this test, we randomly sample 20 categories each from the novel instance test and the novel category test, with 20 object instances drawn from each category. For the synthesized object test set, we test all 5 categories, each with 10 instances. For each instance, a single random view is provided. The results are shown in Figure 3. Despite the fact that AlexNet knows more about the semantic features of each category, our OPnet still achieves comparable results. OPnet here shows an improved ability in similarity judgment at the categorical level. On our artificially synthesized object dataset, where all three networks have no prior experience, OPnet performs better than AlexNet. AlexNetFT performs extremely well on trained categories, likely because it is overfitted to the limited trained objects, even though it uses the same amount of data. This overfitting problem shows that training with only class labels might not preserve the essential information needed to develop transferable, general and abstract feature representations, especially with a limited training dataset."}, {"section_index": "7", "section_name": "3.1 CORRELATION WITH HUMAN PERCEPTION", "section_text": "Using the novel objects from Tenenbaum et al. (2011), we are able to compare our networks with human similarity perception. We collect 41 images from the paper, one image per object. A pairwise similarity matrix is calculated based on the cosine distance of their feature representations. We can then perform hierarchical agglomerative clustering to obtain a tree structure, using the Nearest Point Algorithm. That is, for all points i in cluster u and points j in cluster v, the distance of the two clusters is calculated by dist(u, v) = min(D(u[i], v[j])), where D(·) is the cosine distance function. We merge the two clusters with the shortest distance successively to construct the tree. The tree based on human perception is constructed by giving human subjects all the images and asking them to merge the two clusters that are most similar each time, similar to the hierarchical agglomerative clustering algorithm.
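A minimal SciPy sketch of this procedure, building single-linkage (nearest point) trees from cosine distances and comparing two trees through the Spearman correlation of their cophenetic distances (the function names are our own, not from the paper):

import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def tree_correlation(features_net_a, features_net_b):
    coph = []
    for feats in (features_net_a, features_net_b):
        dists = pdist(feats, metric="cosine")   # pairwise cosine distances
        tree = linkage(dists, method="single")  # nearest-point agglomeration
        coph.append(cophenet(tree, dists)[1])   # cophenetic distance per pair
    rho, _ = spearmanr(coph[0], coph[1])
    return rho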
Results are shown in Figure 4. In order to quantitatively measure the similarity between the trees output by the neural networks and the one based on human perception, we calculate the Cophenetic distances on the tree for each pair of objects. For objects i and j, the Cophenetic distance is defined as t_{i,j} = dist(u, v), i ∈ u, j ∈ v, where u, v are clusters connected by a U-link. Finally, we can evaluate the similarity of the two trees by calculating Spearman's rank correlation coefficient. In the experiment, the Spearman correlation is 0.460 between AlexNet and the human perception and 0.659 between OPnet and the human perception, meaning that our OPnet, trained with object persistence constraints on a relatively small set of objects, automatically yielded a higher match to the human perceptual similarity data. This finding provides some support to our conjecture that object persistence might play an important role in shaping human similarity judgment.

Figure 4: Hierarchical clustering of the alien objects, based on (a) human perceptions, (b) AlexNet features and (c) OPnet features. The dendrograms illustrate how each cluster is composed by drawing a U-shaped link between a cluster and its children. The height of each U-link denotes the distance between its children clusters when they are merged.

Figure 5: Distance measures for 5 cabinet objects. Lighter pixels mean larger distance. On the left are the objects, each with 12 views, whose similarity distances between each other we are interested in. In the middle and on the right are the cosine distances of the output features of OPnet and AlexNet respectively. The element on the i-th row and the j-th column stands for the cosine distance between the i-th and j-th images. The i-th image is rendered from the ⌊i/12⌋-th object and the (i mod 12)-th view.

We study the feature representations in these networks and their transformation induced by the object persistence constraints to understand how the changes in similarity judgment performance come about. As our network uses cosine distance in the feature space as the similarity measure, we study how this measure changes in the view-manifold of the same object and between views of different objects. We first visualize the pairwise similarity distance matrices of AlexNet and OPnet in Figure 5. We randomly choose 5 objects from the cabinet category for illustration. Each object has 12 views that the network has never seen before. Images are arranged first by different object instances (in columns) then by views (in rows). Many properties of the view manifolds are revealed. First, for the matrix of OPnet, we can clearly see five dark blocks formed along the diagonal, each standing for the strong similarity (small distance) among the different views of the same cabinet. The dark blocks mean that OPnet is associating different views of the same object together, reducing intra-object distance relative to inter-object distance. In this way, the similarity judgment of the OPnet becomes more viewpoint independent. On the other hand, the similarity matrix of AlexNet shows a variety of patterns across all objects within the same category. 
A closer look at these patterns suggests that AlexNet first forms groups by certain views (e.g. side-views), and then by objects, resulting in a more viewpoint dependent similarity measure that is poor in discriminating objects within a category. Second, even though OPnet groups different views together, the view-manifold has not degenerated into a single point. Certain patterns can be seen inside each dark block of OPnet's matrix, forming a hierarchical structure: different views of the same object are more similar to each other than to another object, and some rotations in angle are considered more similar than others. To illustrate how the view manifolds have contracted but not completely degenerated, we randomly sample objects from the novel instance test set and use TSNE to plot them in 2D, as shown in Figure 6. We can see clearly that different views of the same object are considered more similar in the feature space, and objects form tight and distinct clusters.

Figure 6: TSNE visualization of the features produced by AlexNet and OPnet, on four categories. Each point represents a view of an object. Different colors represent different objects.

We borrow a measurement from Linear Discriminant Analysis (LDA) to evaluate how tightly different views of the same object are clustered together, relative to the distance among different objects within the same category. Formally, let S_i be the set of all the objects inside one category and c be the set of all views for one object, let \bar{x} be the center of all image features, and \mu_c be the center for the object c. We then calculate the score for category i using the following equation:

\mathrm{score}_i = \frac{\sigma_{\mathrm{inter\_instance}}}{\sigma_{\mathrm{intra\_instance}}} = \frac{\frac{1}{|S_i|} \sum_{c \in S_i} \|\mu_c - \bar{x}\|^2}{\frac{1}{|S_i|} \sum_{c \in S_i} \frac{1}{|c|} \sum_{x \in c} \|f(x) - \mu_c\|^2}

We then average over all the categories to get a score for each network. The higher the score is, the larger the inter-object distance is compared to the intra-object distance, and the more closely different views of the same object are grouped together. In the experiment with the novel instance test set, OPnet's score is 0.535 whereas AlexNet's is 0.328, showing that the different views of the same object are more similar than those between different objects, due to the object persistence constraint.
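One plausible NumPy reading of this per-category score, assuming each object's features are given as an array of shape (n_views, dim):

import numpy as np

def instance_clustering_score(features_by_object):
    # Inter-object scatter of the per-object means around the global center,
    # divided by the average intra-object scatter of views around their mean.
    mus = [f.mean(axis=0) for f in features_by_object]
    center = np.concatenate(features_by_object).mean(axis=0)
    inter = np.mean([np.sum((mu - center) ** 2) for mu in mus])
    intra = np.mean([np.mean(np.sum((f - mu) ** 2, axis=1))
                     for f, mu in zip(features_by_object, mus)])
    return inter / intra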
In this work, we fine-tune AlexNet with object persistence constraints in the framework of distance metric learning with a Siamese triplet. This fine-tuning modifies the view-manifold of the object representation, bringing closer together the representations of an object in different views, driving apart representations of different objects in the same category, resulting in better intra-categorical object recognition, without compromising inter-categorical discrimination. We investigated whether this view-manifold learning results in an improvement in the network's ability to recognize the similarity of novel objects that have never been seen before, by performing instance and categorical image retrieval on artificial novel objects or novel object classes, including a set tested in human similarity judgment. Interestingly, we find that AlexNet, with its rich feature representations, already performs similarity judgment significantly above chance, in the sense that different views of the same object are considered more similar than the views of another object in the same category, and objects in the same category are considered to be more similar than objects in different categories. Fine-tuning with the object persistence constraint significantly improves this "similarity judgment" among a variety of novel objects, suggesting that the view-manifold learning in the OPnet is accompanied by feature embeddings with more general and abstract attributes that are transferable, likely at the level of local object parts.

From a technical point of view, our OPnet performs better than earlier approaches (Li et al., 2015) in instance and categorical retrieval of novel objects. We have tested our approach with a real image database and found it only yields a slight improvement over AlexNet. That database contains 1000 objects with different views, but without categorical labels. OPnet's superiority over AlexNet lies in its better discrimination of objects within the same category. When objects are not organized in categories, i.e. when each object is essentially treated as a category, OPnet loses its advantages. In addition, there are more complex variations, such as lighting and scale, in real scene environments that our current OPnet has not considered. We plan to develop this model to discount additional nuisance variables and to develop or find databases to explore the transferability of its view-manifold learning in more general settings.

Our work was motivated by our hypothesis that the object persistence/continuity constraint in our visual experience might play a role in the development of neural representations that shape our similarity judgment of objects that we have not seen before. The fact that fine-tuning AlexNet with this additional constraint automatically yields a new view-manifold that matches human similarity judgment data better than AlexNet lends some support to our hypothesis. However, more extensive testing with human perception ground-truth will be needed to fully confirm our hypothesis."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "Xingyu Lin and Hao Wang were supported by the PKU-CMU summer internship program. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00007. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. 
Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

We thank Kalina Ko for helping us to construct part of the synthesized object database."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "R. L. Goldstone. Similarity. The Encyclopedia of the Mind, pp. 696-699, 2013.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012.
Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):669-688, 1993.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.
Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 886-893. IEEE, 2005.
J. Deng, W. Dong, R. Socher, L. J. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255, June 2009. doi: 10.1109/CVPR.2009.5206848.
Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675-678. ACM, 2014.
Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. 2014.
Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via cnn image purification. ACM Trans. Graph., 2015.
Francisco Massa, Bryan Russell, and Mathieu Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. arXiv preprint arXiv:1512.02497, 2015.
Joshua B Tenenbaum, Charles Kemp, Thomas L Griffiths, and Noah D Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279-1285, 2011.
Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386-1393, 2014.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
Gavin Perry, Edmund T Rolls, and Simon M Stringer. 
Spatial vs temporal continuity in view invariant visual object recognition learning. Vision Research, 46(23):3994-4006, November 2006.

APPENDIX A EXAMPLES OF SOME TOP RANKING RESULTS"}, {"section_index": "10", "section_name": "APPENDIX B INSTANCE RETRIEVAL RESULTS USING FEATURES FROM DIFFERENT LAYERS", "section_text": "As shown in many works (Massa et al., 2015; Aubry & Russell, 2015), features from different layers sometimes perform differently for a given task. For the instance retrieval task on the novel instance dataset of the ShapeNet, we compare OPnet and AlexNet using features from different layers, as shown in Figure 8. The accuracy of AlexNet is fairly flat up to conv3, and then keeps increasing until layer fc8, where the feature becomes a categorical probability and is not appropriate for instance level discrimination. On the other hand, the object persistence training gives a significant increase in accuracy in the fully connected layers.

Figure 8: Instance retrieval results (mean average precision) using features from different layers, for AlexNet+CosDis and OPnet.

Figure 7: Examples of top instance retrieval results for AlexNet and OPnet on the ShapeNet novel category, synthesized objects, and Pokemon test sets; each row shows a query followed by the OPnet and AlexNet retrieval results. Images that are different views of the same object (which are considered more similar) are marked with a red solid rectangle while views of other objects are marked with a gray dashed rectangle. Obviously from the gun example we can see how the retrieval results for AlexNet are highly view-dependent."}]
HJDBUF5le
[{"section_index": "0", "section_name": "TOWARDS A NEURAL STATISTICIAN", "section_text": "Harrison Edwards\nSchool of Informatics\nUniversity of Edinburgh\nEdinburgh, UK\n1.L.Edwards@sms.ed.ac.uk\nAn efficient learner is one who reuses what they already know to tackle a new\nproblem. For a machine learner, this means understanding the similarities amongst\nlatasets. In order to do this, one must take seriously the idea of working with\natasets, rather than datapoints, as the key objects to model. Towards this goal,\nwe demonstrate an extension of a variational autoencoder that can learn a method\nfor computing representations, or statistics, of datasets in an unsupervised fash-\nion. The network is trained to produce statistics that encapsulate a generative\nmodel for each dataset. Hence the network enables efficient learning from new\nlatasets for both unsupervised and supervised tasks. We show that we are able\nto learn statistics that can be used for: clustering datasets, transferring generative\nmodels to new datasets, selecting representative samples of datasets and classify-\ning previously unseen classes. We refer to our model as a neural statistician, and\nby this we mean a neural network that can learn to compute summary statistics of\nlatasets without supervision."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The machine learning community is well-practised at learning representations of data-points and se-\nquences. A middle-ground between these two is representing, or summarizing, datasets - unorderec\ncollections of vectors, such as photos of a particular person, recordings of a given speaker or a doc-\nument as a bag-of-words. Where these sets take the form of i.i.d samples from some distribution\nsuch summaries are called statistics. We explore the idea of using neural networks to learn statistics\nand we refer to our approach as a neural statistician.\nWe are given datasets D; for i \u20ac Z. Each dataset D; = {x1,..., xp, } consists of a number of i.i.d\nsamples from an associated distribution p; over R\u201d. The task can be split into learning and inference\ncomponents. The learning component is to produce a generative model 5; for each dataset D;. We\nassume there is a common underlying generative process p such that p; = p(-!e;) for ce; \u20ac R! drawr\nAmos Storkey\n\nSchool of Informatics\nUniversity of Edinburgh\nEdinburgh, UK\n\nRN CtarkayvQad acin"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The key result of our approach is a statistic network that takes as input a set of vectors and outputs\na vector of summary statistics specifying a generative model of that set - a mean and variance\nspecifying a Gaussian distribution in a latent space we term the context. 
The advantages of our approach are that it is:
Unsupervised: It provides a principled and unsupervised way to learn summary statistics as the output of a variational encoder of a generative model.
Data efficient: If one has a large number of small but related datasets, modelling the datasets jointly enables us to gain statistical strength.
Parameter efficient: By using summary statistics instead of, say, categorical labellings of each dataset, we decouple the number of parameters of the model from the number of datasets.
Capable of few-shot learning: If the datasets correspond to examples from different classes, class embeddings (summary statistics associated with examples from a class) allow us to handle new classes at test time.
from p(c). We refer to c as the context. The inference component is to give an approximate posterior over the context q(c|D) for a given dataset, produced by a statistic network.
In order to exploit the assumption of a hierarchical generative process over datasets we will use a 'parameter-transfer approach' (see Pan & Yang, 2010) to extend the variational autoencoder model of Kingma & Welling (2013).
Figure 1: Left: basic hierarchical model, where the plate encodes the fact that the context variable c is shared across each item in a given dataset. Center: full neural statistician model with three latent layers z_1, z_2, z_3. Each collection of incoming edges to a node is implemented as a neural network, the input of which is the concatenation of the edges' sources, the output of which is a parameterization of a distribution over the random variable represented by that node. Right: the statistic network, which combines the data via an exchangeable statistic layer."}, {"section_index": "3", "section_name": "3.1 VARIATIONAL AUTOENCODER", "section_text": "The variational autoencoder is a latent variable model p(x|z; \theta) (often called the decoder) with parameters \theta. For each observed x, a corresponding latent variable z is drawn from p(z), so that

p(x) = \int p(x|z; \theta) p(z) dz.

The generative parameters \theta are learned by introducing a recognition network (also called an encoder) q(z|x; \phi) with parameters \phi. The recognition network gives an approximate posterior over the latent variables that can then be used to give the standard variational lower bound (Saul & Jordan, 1996) on the single-datum log-likelihood, i.e. log p(x|\theta) >= L_x, where

L_x = E_{q(z|x; \phi)}[log p(x|z; \theta)] - D_{KL}(q(z|x; \phi) || p(z)).

Likewise the full-data log likelihood is lower bounded by the sum of the L_x terms over the whole dataset. We can then optimize this lower bound with respect to \phi and \theta using the reparameterization trick introduced by Kingma & Welling (2013) and Rezende et al. (2014) to get a Monte-Carlo estimate of the gradient.
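To make the single-datum bound concrete, the following minimal numpy sketch estimates L_x for a toy model; the affine "networks", sizes and the unit-variance Gaussian likelihood are illustrative assumptions, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 5, 2

# Stand-in "networks" (illustrative parameters only).
W_enc = rng.normal(size=(2 * d_z, d_x)) * 0.1   # encoder: x -> (mu_z, log var_z)
W_dec = rng.normal(size=(d_x, d_z)) * 0.1       # decoder: z -> mean of p(x|z)

def elbo(x, n_samples=64):
    h = W_enc @ x
    mu, logvar = h[:d_z], h[d_z:]
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=(n_samples, d_z))
    z = mu + np.exp(0.5 * logvar) * eps
    # Monte-Carlo estimate of E_q[log p(x|z)] for a unit-variance
    # Gaussian likelihood (up to an additive constant).
    recon = -0.5 * np.sum((x - z @ W_dec.T) ** 2, axis=1).mean()
    # Analytic KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon - kl

x = rng.normal(size=d_x)
print("ELBO estimate:", elbo(x))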
"}, {"section_index": "4", "section_name": "3.2 BASIC MODEL", "section_text": "We extend the variational autoencoder to the model depicted on the left in Figure 1. This includes a latent variable c, the context, that varies between different datasets but is constant, a priori, for items within the same dataset. Now, the likelihood of the parameters \theta for one single particular dataset D is given by

p(D) = \int p(c) [ \prod_{x \in D} \int p(x|z; \theta) p(z|c; \theta) dz ] dc.

The prior p(c) is chosen to be a spherical Gaussian with zero mean and unit variance. The conditional p(z|c; \theta) is Gaussian with diagonal covariance, where all the mean and variance parameters depend on c through a neural network. Similarly, the observation model p(x|z; \theta) will be a simple likelihood function appropriate to the data modality, with dependence on z parameterized by a neural network. For example, with real-valued data, a diagonal Gaussian likelihood could be used, where the mean and log variance of x are created from z via a neural network.
We use approximate inference networks q(z|x, c; \phi) and q(c|D; \phi), with parameters collected into \phi, to once again enable the calculation and optimization of a variational lower bound on the log-likelihood. The single-dataset log likelihood lower bound is given by

L_D = E_{q(c|D; \phi)} [ \sum_{x \in D} E_{q(z|c, x; \phi)}[log p(x|z; \theta)] - D_{KL}(q(z|c, x; \phi) || p(z|c; \theta)) ] - D_{KL}(q(c|D; \phi) || p(c)).

As with the generative distributions, the likelihood forms for q(z|x, c; \phi) and q(c|D; \phi) are diagonal Gaussian distributions, where all the mean and log variance parameters in each distribution are produced by a neural network taking the conditioning variables as inputs. Note that q(c|D; \phi) accepts as input a dataset D, and we refer to this as the statistic network. We describe this in Subsection 3.4.
The full-data variational bound is given by summing the variational bound for each dataset in our collection of datasets. It is by learning the difference of the within-dataset and between-dataset distributions that we are able to discover an appropriate statistic network.
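A one-sample numpy sketch of the single-dataset bound L_D above, assuming diagonal Gaussians everywhere; the linear "networks" W_stat, W_q, W_p and W_dec are illustrative stand-ins for the paper's deep networks.

import numpy as np

rng = np.random.default_rng(1)
d_x, d_z, d_c, n = 3, 2, 4, 10
D = rng.normal(size=(n, d_x))                      # one dataset of n i.i.d. points

# Stand-in linear "networks"; in the paper these are deep nets.
W_stat = rng.normal(size=(2 * d_c, d_x)) * 0.2     # statistic network q(c|D)
W_q = rng.normal(size=(2 * d_z, d_x + d_c)) * 0.2  # inference network q(z|x, c)
W_p = rng.normal(size=(2 * d_z, d_c)) * 0.2        # conditional prior p(z|c)
W_dec = rng.normal(size=(d_x, d_z)) * 0.2          # mean of observation model p(x|z)

def gauss_kl(mu0, lv0, mu1, lv1):
    # Analytic KL( N(mu0, e^lv0) || N(mu1, e^lv1) ) for diagonal Gaussians.
    return 0.5 * np.sum(lv1 - lv0 + (np.exp(lv0) + (mu0 - mu1) ** 2) / np.exp(lv1) - 1.0)

h = W_stat @ D.mean(axis=0)                        # pooled statistic -> context posterior
mu_c, lv_c = h[:d_c], h[d_c:]
c = mu_c + np.exp(0.5 * lv_c) * rng.normal(size=d_c)   # one sample from q(c|D)

L_D = -gauss_kl(mu_c, lv_c, np.zeros(d_c), np.zeros(d_c))  # -KL(q(c|D) || p(c))
for x in D:
    hq = W_q @ np.concatenate([x, c])
    mu_q, lv_q = hq[:d_z], hq[d_z:]
    hp = W_p @ c
    mu_p, lv_p = hp[:d_z], hp[d_z:]
    z = mu_q + np.exp(0.5 * lv_q) * rng.normal(size=d_z)
    recon = -0.5 * np.sum((x - W_dec @ z) ** 2)    # Gaussian log-lik, up to a constant
    L_D += recon - gauss_kl(mu_q, lv_q, mu_p, lv_p)
print("one-sample estimate of L_D:", L_D)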
"}, {"section_index": "5", "section_name": "3.3 FULL MODEL", "section_text": "The basic model works well for modelling simple datasets, but struggles when the datasets have complex internal structure. To increase the sophistication of the model we use multiple stochastic layers z_1, ..., z_L and introduce skip-connections for both the inference and generative networks. The generative model is shown graphically in Figure 1 in the center. The probability of a dataset D is then given by

p(D) = \int p(c) \prod_{x \in D} [ \int p(x|c, z_{1:L}; \theta) p(z_L|c; \theta) \prod_{i=1}^{L-1} p(z_i|z_{i+1}, c; \theta) dz_{1:L} ] dc,

where the p(z_i|z_{i+1}, c; \theta) are again Gaussian distributions, with the mean and log variance given as the output of neural networks. The generative process for the full model is described in Algorithm 1.
The full approximate posterior factorizes analogously as

q(c, z_{1:L}|D; \phi) = q(c|D; \phi) \prod_{x \in D} q(z_L|x, c; \phi) \prod_{i=1}^{L-1} q(z_i|z_{i+1}, x, c; \phi).

For convenience we write the variational lower bound for a dataset as the combination R_D - C_D - L_D of three parts: a reconstruction term R_D, a context divergence C_D and a latent divergence L_D, where

R_D = E_{q(c|D; \phi)} \sum_{x \in D} E_{q(z_{1:L}|c, x; \phi)}[log p(x|z_{1:L}, c; \theta)],
C_D = D_{KL}(q(c|D; \phi) || p(c)),
L_D = E_{q(c, z_{1:L}|D; \phi)} [ \sum_{x \in D} D_{KL}(q(z_L|c, x; \phi) || p(z_L|c; \theta)) + \sum_{i=1}^{L-1} D_{KL}(q(z_i|z_{i+1}, c, x; \phi) || p(z_i|z_{i+1}, c; \theta)) ].

The skip-connections p(z_i|z_{i+1}, c; \theta) and q(z_i|z_{i+1}, x; \phi) allow the context to specify a more precise distribution for each latent variable by explaining-away more generic aspects of the dataset at each stochastic layer. This architecture was inspired by recent work on probabilistic ladder networks by Kaae Sonderby et al. (2016). Complementing these are the skip-connections from each latent variable to the observation p(x|z_{1:L}, c; \theta); the intuition here is that each stochastic layer can focus on representing a certain level of abstraction, since its information does not need to be copied into the next layer. A similar approach was used in Maaloe et al. (2016).
Once again, note that we are maximizing the lower bound to the log likelihood over many datasets D: we want to maximize the expectation of the bound over all datasets. We do this optimization using stochastic gradient descent. In contrast to a variational autoencoder, where a minibatch would consist of a subsample of datapoints from the dataset, we use minibatches consisting of a subsample of datasets - tensors of shape (batch size, sample size, number of features)."}, {"section_index": "6", "section_name": "3.4 STATISTIC NETWORK", "section_text": "In addition to the standard inference networks we require a statistic network q(c|D; \phi) to give an approximate posterior over the context c given a dataset D = {x_1, ..., x_k}. This inference network must capture the exchangeability of the data in D. We use a feedforward neural network consisting of three main elements (a toy sketch is given after this list):
- An instance encoder E that takes each individual datapoint x_i to a vector e_i = E(x_i).
- An exchangeable instance pooling layer that collapses the matrix (e_1, ..., e_k) to a single pre-statistic vector v. Examples include elementwise means, sums, products, geometric means and maximum. We use the sample mean for all experiments.
- A final post-pooling network that takes v to a parameterization of a diagonal Gaussian.
We note that the humble sample mean already gives the statistic network a great deal of representational power, due to the fact that the instance encoder can learn a representation where averaging makes sense. For example, since the instance encoder can approximate a polynomial on a compact domain, and so can the post-pooling network, a statistic network can approximate any moment of a distribution.
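A toy numpy forward pass through a statistic network of this shape: instance encoder, exchangeable mean-pooling, then a post-pooling network emitting (mu_c, logvar_c). The weights and sizes are illustrative stand-ins, not a trained model.

import numpy as np

rng = np.random.default_rng(3)
d_x, d_e, d_c, n = 2, 16, 3, 50

W1 = rng.normal(size=(d_e, d_x)) * 0.3          # instance encoder E
W2 = rng.normal(size=(2 * d_c, d_e)) * 0.3      # post-pooling network

def statistic_network(D):
    e = np.maximum(0.0, D @ W1.T)               # encode each datapoint (ReLU)
    v = e.mean(axis=0)                          # exchangeable pooling: sample mean
    h = W2 @ v                                  # post-pooling network
    return h[:d_c], h[d_c:]                     # mu_c, logvar_c

D = rng.normal(size=(n, d_x))
mu_c, logvar_c = statistic_network(D)
# Permutation invariance: shuffling the datapoints leaves the statistic unchanged.
mu_p, _ = statistic_network(D[rng.permutation(n)])
print(np.allclose(mu_c, mu_p))                  # True

# For a minibatch of datasets stored as a (batch size, sample size, number of
# features) tensor, the same exchangeable pooling reduces over axis=1.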
"}, {"section_index": "7", "section_name": "4 RELATED WORK", "section_text": "Due to the general nature of the problem considered, our work touches on many different topics, which we now attempt to summarize.
Topic models and graphical models. The form of the graphical model in Figure 1 on the left is equivalent to that of a standard topic model. In contrast to traditional topic models we do not use discrete latent variables, or restrict to discrete data. Work such as that by Ranganath et al. (2014) has extended topic models in various directions, but importantly we use flexible conditional distributions and dependency structures parameterized by deep neural networks. Recent work has explored neural networks for document models (see e.g. Miao et al., 2015) but has been limited to modelling datapoints with little internal structure. Along related lines are 'structured variational autoencoders' (see Johnson et al., 2016), which treat the general problem of integrating graphical models with variational autoencoders.
Transfer learning. There is a considerable literature on transfer learning; for a survey see Pan & Yang (2010). There they discuss 'parameter-transfer' approaches whereby parameters or priors are shared across datasets, and our work fits into that paradigm. For examples see Lawrence & Platt (2004), where they share priors between Gaussian processes, and Evgeniou & Pontil (2004), where they take an SVM-like approach to share kernels.
One-shot learning. Learning quickly from small amounts of data is a topic of great interest. Lake et al. (2015) use Bayesian program induction for one-shot generation and classification, and Koch (2015) trains a Siamese (Chopra et al., 2005) convolutional network for one-shot image classification. We note the relation to the recent work of Rezende et al. (2016), in which the authors use a conditional recurrent variational autoencoder capable of one-shot generalization by taking a conditioning data point as extra input. The important differences here are that we jointly model datasets and datapoints and consider datasets of any size. Recent approaches to one-shot classification are matching networks (Vinyals et al., 2016b) (which was concurrent with the initial preprint of this work), and related previous work (Santoro et al., 2016). The former can be considered a kind of differentiable nearest neighbour classifier, and the latter augments the network with memory to store information about the classification problem. Both are trained end-to-end for the classification problem, whereas the present work is a general approach to learning representations of datasets. Probably the closest previous work is by Salakhutdinov et al. (2012), where the authors learn a topic model over the activations of a DBM for one-shot learning. Compared with their work we use modern architectures and easier-to-train VAEs; in particular we have fast and amortized feedforward inference for test (and training) datasets, avoiding the need for MCMC.
Multiple-instance learning. There is previous work on classifying sets in multiple-instance learning; for a useful survey see Cheplygina et al. (2015). Typical approaches involve adapting kernel-based methods such as support measure machines (Muandet et al., 2012), support distribution machines (Poczos et al., 2012) and multiple-instance kernels (Gartner et al., 2002). We do not consider applications to multiple-instance learning type problems here, but it may be fruitful to do so in the future.
Set2Seq. In very related work, Vinyals et al. (2016a) explore architectures for mapping sets to sequences. There they use an LSTM to repeatedly compute weighted averages of the datapoints and use this to tackle problems such as sorting a list of numbers. The main difference between their work and ours is that they primarily consider supervised problems, whereas we present a general unsupervised method for learning representations of sets of i.i.d. instances. In future work we may also explore recurrently computing statistics.
ABC. There has also been work on learning summary statistics for Approximate Bayesian Computation, either by learning to predict the parameters generating a sample as a supervised problem, or by using kernel embeddings as infinite-dimensional summary statistics. See the work by Fukumizu et al. (2013) for an example of kernel-based approaches. More recently, Jiang et al. (2015) used deep neural networks to predict the parameters generating the data. The crucial differences are that their problem is supervised, they do not leverage any exchangeability properties the data may have, nor can their method deal with varying sample sizes.
Given an input set x_1, ..., x_k, we can use the statistic network to calculate an approximate posterior over contexts q(c|x_1, ..., x_k; \phi). Under the generative model, each context c specifies a conditional model p(x|c; \theta). To get samples from the model corresponding to the most likely posterior value of c, we set c to the mean of the approximate posterior and then sample directly from the conditional distributions. This is described in Algorithm 2. We use this process in our experiments to show samples. 
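A minimal numpy sketch of this conditional-sampling procedure (the idea of Algorithm 2): set c to the posterior mean of q(c|D), then sample from the conditional model. All maps and sizes are illustrative stand-ins.

import numpy as np

rng = np.random.default_rng(4)
d_x, d_z, d_c = 2, 2, 3
W_pz = rng.normal(size=(2 * d_z, d_c)) * 0.2    # c -> params of p(z|c)
W_px = rng.normal(size=(d_x, d_z + d_c)) * 0.2  # (z, c) -> mean of p(x|z, c)

def sample_conditioned(mu_c, k=5):
    c = mu_c                                     # posterior mean of q(c|D)
    h = W_pz @ c
    mu_z, lv_z = h[:d_z], h[d_z:]
    out = []
    for _ in range(k):
        z = mu_z + np.exp(0.5 * lv_z) * rng.normal(size=d_z)
        out.append(W_px @ np.concatenate([z, c]))  # mean of the observation model
    return np.stack(out)

mu_c = rng.normal(size=d_c)                      # pretend output of the statistic net
print(sample_conditioned(mu_c))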
In all experiments, we use the Adam optimization algorithm (Kingma & Ba, 2014) to optimize the parameters of the generative models and variational approximations. Batch normalization (Ioffe & Szegedy, 2015) is implemented for convolutional layers and we always use a batch size of 16. We primarily use the Theano (Theano Development Team, 2016) framework with the Lasagne (Dieleman et al., 2015) library, but the final experiments with face data were done using TensorFlow (Abadi et al., 2015). In all cases experiments were terminated after a given number of epochs, when training appeared to have sufficiently converged (300 epochs for the OMNIGLOT, YouTube and spatial MNIST examples, and 50 epochs for the synthetic experiment)."}, {"section_index": "8", "section_name": "5.1 SIMPLE 1-D DISTRIBUTIONS", "section_text": "In our first experiment we wanted to know if the neural statistician would learn to cluster synthetic 1-D datasets by distribution family. We generated a collection of synthetic 1-D datasets, each containing 200 samples. Datasets consist of samples from either an Exponential, Gaussian, Uniform or Laplacian distribution with equal probability. Means and variances are sampled from U[-1, 1] and U[0.5, 2] respectively. The training data contains 10K sets.
The architecture for this experiment contains a single stochastic layer with 32 units for z and 3 units for c. The model p(x|z, c; \theta) and variational approximation q(z|x, c; \phi) are each a diagonal Gaussian distribution, with all mean and log variance parameters given by a network composed of three dense layers with ReLU activations and 128 units. The statistic network determining the mean and log variance parameters of the posterior over context variables is composed of three dense layers before and after pooling, each with 128 units with Rectified Linear Unit (ReLU) activations.
Figure 2 shows 3-D scatter plots of the summary statistics learned. Notice that the different families of distribution cluster. It is interesting to observe that the Exponential cluster is differently orientated to the others, perhaps reflecting the fact that it is the only non-symmetric distribution. We also see that between the Gaussian and Laplacian clusters there is an area of ambiguity, which is as one might expect. We also see that within each cluster the mean and variance are mapped to orthogonal directions.
Figure 2: Three different views of the same data. Each point is the mean of the approximate posterior over the context q(c|D; \phi), where c \in R^3. Each point is a summary statistic for a single dataset with 200 samples. The top plot shows points colored by distribution family, the left plot is colored by the mean, and the right plot is colored by the variance. The plots have been rotated to illustrative angles.
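For concreteness, a small numpy sketch of the data-generation protocol described above; the mapping from the sampled (mean, variance) pair to each family's native parameters is our assumption for illustration.

import numpy as np

rng = np.random.default_rng(12)

def sample_dataset(n=200):
    mean = rng.uniform(-1.0, 1.0)
    var = rng.uniform(0.5, 2.0)
    family = rng.choice(["exponential", "gaussian", "uniform", "laplace"])
    if family == "gaussian":
        x = rng.normal(mean, np.sqrt(var), size=n)
    elif family == "laplace":
        x = rng.laplace(mean, np.sqrt(var / 2.0), size=n)    # var = 2 b^2
    elif family == "uniform":
        half = np.sqrt(3.0 * var)                            # var = (2 half)^2 / 12
        x = rng.uniform(mean - half, mean + half, size=n)
    else:
        x = rng.exponential(np.sqrt(var), size=n)            # var = scale^2
        x = x - np.sqrt(var) + mean                          # shift to the target mean
    return family, x

family, x = sample_dataset()
print(family, x.mean(), x.var())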
"}, {"section_index": "9", "section_name": "5.2 SPATIAL MNIST", "section_text": "Building on the previous experiments, we investigate 2-D datasets that have complex structure, but where the datapoints contain little information by themselves, making this a good test of the statistic network. We created a dataset called spatial MNIST. In spatial MNIST each image from MNIST (LeCun et al., 1998) is turned into a dataset by interpreting the normalized pixel intensities as a probability density and sampling coordinate values. An example is shown in Figure 3. This creates two-dimensional spatial datasets. We used a sample size of 50. Note that since the pixel coordinates are discrete, it is necessary to dequantize them by adding uniform noise u ~ U[0, 1] to the coordinates if one models them as real numbers, else one can get arbitrarily high densities (see Theis et al. for a discussion of this point).
Figure 3: An image from MNIST on the left, transformed to a set of 50 (x, y) coordinates, shown as a scatter plot on the right.
The generative architecture for this experiment contains 3 stochastic z layers, each with 2 units, and a single c layer with 64 units. The means and log variances of the Gaussian likelihood for p(x|z_{1:3}, c; \theta), and each subnetwork for z in both the encoder and decoder, contained 3 dense layers with 256 ReLU units each. The statistic network also contained 3 dense layers pre-pooling and 3 dense layers post-pooling, with 256 ReLU units.
In addition to being able to sample from the model conditioned on a set of inputs, we can also summarize a dataset by choosing a subset S of D to minimise the KL divergence of q(c|D; \phi) from q(c|S; \phi). We do this greedily by iteratively discarding points from the full sample. Pseudocode for this process is given in Algorithm 3. The results are shown in Figure 4. We see that the model is capable of handling complex arrangements of datapoints. We also see that it can select sensible subsets of a dataset as a summary.
Figure 4: Conditioned samples from spatial MNIST data. Blue and red digits are the input sets; black digits above correspond to samples given the input. Red points correspond to a 6-sample summary of the dataset.
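A minimal sketch of the spatial-MNIST construction: treat normalized pixel intensities as a density, sample 50 (x, y) pixel coordinates, and dequantize with U[0, 1) noise. The `img` array here is a random stand-in for an actual MNIST digit.

import numpy as np

rng = np.random.default_rng(5)
img = rng.random((28, 28))                        # stand-in for a 28x28 MNIST digit

def image_to_pointset(img, n=50):
    p = img.ravel() / img.sum()                   # normalized intensities as a pmf
    idx = rng.choice(img.size, size=n, p=p)       # sample pixel indices
    ys, xs = np.unravel_index(idx, img.shape)
    coords = np.stack([xs, ys], axis=1).astype(float)
    return coords + rng.random(coords.shape)      # dequantize discrete coordinates

print(image_to_pointset(img).shape)               # (50, 2)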
"}, {"section_index": "10", "section_name": "5.3 OMNIGLOT", "section_text": "Next we work with the OMNIGLOT data (Lake et al., 2015). This contains 1628 classes of handwritten characters, but with just 20 examples per class. This makes it an excellent test-bed for transfer / few-shot learning. We constructed datasets by splitting each class into datasets of size 5. We train on datasets drawn from 1200 classes and reserve the remaining classes to test few-shot sampling and classification. We created new classes by rotating and reflecting characters. We resized the images to 28 x 28. We sampled a binarization of each image for each epoch. We also randomly applied the dilation operator from computer vision as further data augmentation, since we observed that the stroke widths are quite uniform in the OMNIGLOT data, whereas there is substantial variation in MNIST; this augmentation improved the visual quality of the few-shot MNIST samples considerably and increased the few-shot classification accuracy by about 3 percent. Finally, we used 'sample dropout', whereby a random subset of each dataset was removed from the pooling in the statistic network, with the number of samples remaining included as an extra feature. This was beneficial since it reduced overfitting and also allowed the statistic network to learn to adjust the approximate posterior over c based on the number of samples.
We used a single stochastic layer with 16 units for z, and 512 units for c. We used a shared convolutional encoder between the inference and statistic networks and a deconvolutional decoder network. Full details of the networks are given in Appendix B.1. The decoder used a Bernoulli likelihood.
In Figure 5 we show two examples of few-shot learning by conditioning on samples of unseen characters from OMNIGLOT, and conditioning on samples of digits from MNIST. The samples are mostly of a high quality, and this shows that the neural statistician can generalize even to new datasets.
Figure 5: Few-shot learning. Left: few-shot learning from OMNIGLOT to MNIST. Left rows are input sets, right rows are samples given the inputs. Right: few-shot learning with OMNIGLOT data on unseen classes. Left rows are input sets, right rows are samples given the inputs. Black-white inversion is applied for ease of viewing.
As a further test we considered few-shot classification of both unseen OMNIGLOT characters and MNIST digits. Given sets of labelled examples of each class D_0, ..., D_9 (for MNIST, say), we computed the approximate posteriors q(c|D_i; \phi) using the statistic network. Then for each test image x we also computed the posterior q(c|x; \phi) and classified it according to the training dataset D_i minimizing the KL divergence from the test context to the training context. This process is described in Algorithm 4. We tried this with either 1 or 5 labelled examples per class and either 5 or 20 classes. For each trial we randomly select K classes, randomly select training examples for each class, and test on the remaining examples. This process is repeated 100 times and the results averaged. The results are shown in Table 1. We compare to a number of results reported in Vinyals et al. (2016b), including Santoro et al. (2016) and Koch (2015). Overall we see that the neural statistician model can be used as a strong classifier, particularly for the 5-way tasks, but it performs worse than matching networks for the 20-way tasks. One important advantage that matching networks have is that, whilst each class is processed independently in our model, the representation in matching networks is conditioned on all of the classes in the few-shot problem. This means that it can exaggerate differences between similar classes, which are more likely to appear in a 20-way problem than in a 5-way problem.
Table 1: The table shows the classification accuracies of various few-shot learning tasks. Models are trained on OMNIGLOT data and tested on either unseen OMNIGLOT classes or MNIST, with varying numbers of samples per class (k-shot) and varying numbers of classes (K-way). Comparisons are to Vinyals et al. (2016b) (Matching), Santoro et al. (2016) (MANN) and Koch (2015) (Siamese). 5-shot MNIST results are included for completeness. [The numerical entries of Table 1 are not recoverable from this copy.]
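A small numpy sketch of the few-shot classification rule just described (the idea of Algorithm 4): embed each labelled class set and the query point as diagonal-Gaussian contexts, then pick the class whose context is closest in KL to the query's. The contexts below are random stand-ins for statistic-network outputs.

import numpy as np

rng = np.random.default_rng(6)
d_c, K = 4, 5

def gauss_kl(mu0, lv0, mu1, lv1):
    # Analytic KL( N(mu0, e^lv0) || N(mu1, e^lv1) ) for diagonal Gaussians.
    return 0.5 * np.sum(lv1 - lv0 + (np.exp(lv0) + (mu0 - mu1) ** 2) / np.exp(lv1) - 1.0)

# (mu, logvar) pairs standing in for q(c|D_i) of each class and q(c|x) of the query.
class_ctx = [(rng.normal(size=d_c), rng.normal(size=d_c) * 0.1) for _ in range(K)]
query_ctx = (class_ctx[2][0] + 0.05 * rng.normal(size=d_c), class_ctx[2][1])

# y_hat = argmin_i KL( N_i || N_x ), as in Algorithm 4.
pred = min(range(K), key=lambda i: gauss_kl(*class_ctx[i], *query_ctx))
print("predicted class:", pred)                   # most likely 2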
Figure 6: Few-shot learning for face data. Samples are from the model trained on the YouTube Faces Database. Left: each row shows an input set of size 5. Center: each row shows 5 samples from the model corresponding to the input set on the left. Right: imagined new faces generated by sampling contexts from the prior. Each row consists of 5 samples from the model given a particular sampled context.
Finally, we provide a proof of concept for generating faces of a particular person. We use the YouTube Faces Database from Wolf et al. (2011). It contains 3,245 videos of 1,595 different people. We use the aligned and cropped-to-face version, resized to 64 x 64. The validation and test sets contain 100 unique people each, and there is no overlap of persons between data splits. The sets were created by sampling frames randomly without replacement from each video; we use a set size of 5 frames. We resample the sets for the training data each epoch.
Our architecture for this problem is based on one presented in Lamb et al. (2016). We used a single stochastic layer with a 500-dimensional latent c and a 16-dimensional z variable. The statistic network and the inference network q(z|x, c; \phi) share a common convolutional encoder, and the decoder uses deconvolutional layers. For full details see Appendix B.2. The likelihood function is a Gaussian, but with the variance parameters shared across all datapoints; this was found to make training faster and more stable.
The results are shown in Figure 6. Whilst there is room for improvement, we see that it is possible to specify a complex distribution on-the-fly with a set of photos of a previously unseen person. The samples conditioned on an input set have a reasonable likeness of the input faces. We also show the ability of the model to generate new datasets, and see that the samples have a consistent identity and varied poses.
We have demonstrated a highly flexible model on a variety of tasks. Going forward, our approach will naturally benefit from advances in generative models, as we can simply upgrade our base generative model, and so future work will pursue this. Compared with some other approaches in the literature for few-shot learning, our requirement for supervision is weaker: we only ask at training time that we are given datasets, but we do not need labels for the datasets, nor even information on whether two datasets represent the same or different classes. It would be interesting then to explore application areas where only this weaker form of supervision is available. There are two important limitations to this work: firstly, the method is dataset hungry; it will likely not learn useful representations of datasets given only a small number of them. Secondly, at test time the few-shot fit of the generative model will not be greatly improved by using larger datasets unless the model was also trained on similarly large datasets. The latter limitation seems like a promising future research direction - bridging the gap between fast adaptation and slow training."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Veronika Cheplygina, David M.J. Tax, and Marco Loog. On classification with bags, groups and sets. Pattern Recognition Letters, 59:11-17, 2015.
Kenji Fukumizu, Le Song, and Arthur Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. The Journal of Machine Learning Research, 14(1):3753-3783, 2013.
Thomas Gartner, Peter A. Flach, Adam Kowalczyk, and Alex J. Smola. Multi-instance kernels. In Proc. 19th International Conference on Machine Learning, pp. 179-186. Morgan Kaufmann, 2002.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.
Bai Jiang, Tung-yu Wu, Charles Zheng, and Wing H Wong. Learning summary statistic for approximate Bayesian computation via deep neural network. arXiv preprint arXiv:1510.02175, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
Gregory Koch. Siamese neural networks for one-shot image recognition. Doctoral dissertation, University of Toronto, 2015.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.
Lars Maaloe, Casper Kaae Sonderby, Soren Kaae Sonderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. arXiv preprint arXiv:1511.06038, 2015.
Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.
Barnabas Poczos, Liang Xiong, Dougal J Sutherland, and Jeff Schneider. Support distribution machines. Technical report, 2012. URL http://arxiv.org/abs/1202.0302.
Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pp. 814-822, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Lawrence K Saul and Michael I Jordan. Exploiting tractable substructures in intractable networks. In Advances in Neural Information Processing Systems 8, 1996.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016b.
Lior Wolf, Tal Hassner, and Itay Maoz. Face recognition in unconstrained videos with matched background similarity. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 529-534. IEEE, 2011.
Algorithm 1 Sampling a dataset of size k
  sample c ~ p(c)
  for i = 1 to k do
    for j = L to 1 do
      sample z_{i,j} ~ p(z_j | z_{i,j+1}, c; \theta)
    end for
    sample x_i ~ p(x | z_{i,1}, ..., z_{i,L}, c; \theta)
  end for

Algorithm 2 Sampling a dataset of size k conditioned on a dataset of size m

Algorithm 3 Selecting a representative sample of size k

Algorithm 4 K-way few-shot classification
  D_1, ..., D_K <- sets of labelled examples for each class
  x <- datapoint to be classified
  N_x <- q(c|x; \phi)   {approximate posterior over c given the query point}
  for i = 1 to K do
    N_i <- q(c|D_i; \phi)
  end for
  y_hat <- argmin_i D_{KL}(N_i || N_x)

OMNIGLOT networks. Shared convolutional encoder x -> h:
2x {conv2d 64 feature maps with 3 x 3 kernels and ELU activations}
conv2d 64 feature maps with 3 x 3 kernels, stride 2 and ELU activations
2x {conv2d 128 feature maps with 3 x 3 kernels and ELU activations}
conv2d 128 feature maps with 3 x 3 kernels, stride 2 and ELU activations
2x {conv2d 256 feature maps with 3 x 3 kernels and ELU activations}
conv2d 256 feature maps with 3 x 3 kernels, stride 2 and ELU activations
Statistic network q(c|D; \phi): h_1, ..., h_k -> \mu_c, \sigma_c
Inference network q(z|x, c; \phi) and latent decoder network p(z|c; \theta):
3x {fully-connected layer with 256 units and ELU activations}
fully-connected linear layers to \mu_z and log \sigma_z^2
Observation decoder network p(x|c, z; \theta): c, z -> \mu_x
fully-connected linear layers with 4 * 4 * 256 units
2x {conv2d 256 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 256 feature maps with 2 x 2 kernels, stride 2, ELU activations
2x {conv2d 128 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 128 feature maps with 2 x 2 kernels, stride 2, ELU activations
2x {conv2d 64 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 64 feature maps with 2 x 2 kernels, stride 2, ELU activations
conv2d 1 feature map with 1 x 1 kernels, sigmoid activations

Face networks. Shared convolutional encoder x -> h:
2x {conv2d 32 feature maps with 3 x 3 kernels and ELU activations}
conv2d 32 feature maps with 3 x 3 kernels, stride 2 and ELU activations
2x {conv2d 64 feature maps with 3 x 3 kernels and ELU activations}
conv2d 64 feature maps with 3 x 3 kernels, stride 2 and ELU activations
2x {conv2d 128 feature maps with 3 x 3 kernels and ELU activations}
conv2d 128 feature maps with 3 x 3 kernels, stride 2 and ELU activations
2x {conv2d 256 feature maps with 3 x 3 kernels and ELU activations}
conv2d 256 feature maps with 3 x 3 kernels, stride 2 and ELU activations
fully-connected layer with 1000 units and ELU activations
fully-connected linear layers to \mu_z and log \sigma_z^2
Observation decoder network p(x|c, z; \theta): c, z -> \mu_x
concatenate z and c
fully-connected layer with 1000 units and ELU activations
fully-connected linear layer with 8 * 8 * 256 units
2x {conv2d 256 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 256 feature maps with 2 x 2 kernels, stride 2, ELU activations
2x {conv2d 128 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 128 feature maps with 2 x 2 kernels, stride 2, ELU activations
2x {conv2d 64 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 64 feature maps with 2 x 2 kernels, stride 2, ELU activations
2x {conv2d 32 feature maps with 3 x 3 kernels and ELU activations}
deconv2d 32 feature maps with 2 x 2 kernels, stride 2, ELU activations
conv2d 3 feature maps with 1 x 1 kernels, sigmoid activations"}]
SJNDWNOlg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Image retrieval is an important problem both for academic research and for industrial applications\nAlthough it has been studied for many years (Sivic & Zisserman| 2003} Philbin et al.| 2007} Tolia\u2019\nfet al.|[2015), it is still a challenging task. Generally, image retrieval is divided into two groups. Thi\nfirst one is the category-level image retrieval (Sharma & Schiele} 2015), in which an image in th\ndataset is deemed to be similar to the query image if they share the same class or they are similar it\nshape and local structures. The other group is the instance-level image retrieval 2015)\nin which an image is considered to match the query if they contain the same object or the sam\nscene. The instance-level image retrieval is harder in that the retrieval method need to encode thi\nlocal and detailed information in order to tell two images apart, e.g., the algorithm should be abl\nto detect the differences between the Eiffel Tower and other steel towers although they have simila\nshapes. In this paper, we focus on the instance-level image retrieval.\nTraditionally, visual instance retrieval is mainly addressed by the BoF (bag of features) based meth-\nods using the local feature descriptors such as SIFT 2004). In order to boost the retrieval\n\nperformances, post-processing techniques such as query expansion (Chum et al.|{2007) and spatial\nverification are also employed.\nWith the decisive victory (Krizhevsky et al.|{2012) over traditional models in the ImageNet (Rus\nimage classification challenge, convolutional neural networks (Lecun et al.\n\ncontinue to achieve remarkable success in diverse fields such as object detection (Liu et al.\n2015} |Shaoging Ren} |2015), semantic segmentation and even image style trans\nfer (Gatys et al.]/2016). Networks trained on the Imagenet classification task can generalize quit\nwell to other tasks, which are either used off-the-shelf (Razavian et al.| or fine-tuned on th\ntask-specific datasets (Azizpour et al} 2014} Long et al.|/2015). Inspired by all these, researcher:\nin the field of image retrieval also shift their interest to the CNNs. Their experiments have show1\npromising and surprising results\nwhich are on par with or surpass the performances of conventional methods like BoF and VLAL\n\n(vector of locally ageregated descriptors) (J\u00e9oou et al. /2010! Arandielovie & Zisserman!!2013) ."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Despite all these previous advances (Babenko et al. (2014} Babenko & Lempitsky) |2015} |Tolia\n2015) on using CNNs for image feature representation, the underlying factors that contribut\nto the\n\nsuccess of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and un\nexplored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or th\nfully-connected layer? What is the best way to represent the multi-scale information of an image.\nClarifying these questions will help us advance a further step towards building a more robust anc\naccurate retrieval system. Also in situations where a large numbers of training samples are not avail\nable, instance retrieval using unsupervised method is still preferable and may be the only option.\nIn this paper, we aim to answer these questions and make three novel contributions. 
First, unlike previous papers, we explicitly choose five factors to study the image representations based on CNNs and conduct extensive experiments to evaluate their impacts on the retrieval performances. We also give detailed analysis of these factors and give our recommendations for combining them. During the experiments, we borrow wisdom from the literature and evaluate its usefulness, but find that it is not as effective as some of the simpler design choices. Second, by combining the insights obtained during the individual experiments, we are able to propose a new multi-scale image representation, which is compact yet effective. Finally, we evaluate our method on four challenging datasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our method is generally applicable and outperforms all previous methods on compact image representations by a large margin.
Multi-scale image representation. Lazebnik et al. (2006) propose the spatial pyramid matching approach to encode spatial information using BoF based methods. They represent an image using a pyramid of several levels or scales. Features from different scales are combined to form the image representation in such a way that coarser levels get less weight while finer levels get more weight. Their argument is that matches found in coarser levels may involve increasingly dissimilar image features. In our paper, we also explore the multi-scale paradigm in the same spirit, using the convolutional feature maps as the local descriptors. We find that the deep features from the convolutional feature maps are distinct from the traditional descriptors: a weighted sum of different levels of features shows no superior performance over a simple summation of them. Kaiming et al. (2014) devise an approach called SPP (spatial pyramid pooling). In SPP, feature maps of the last convolutional layer are divided into a 3- or 4-scale pyramid. First the regional features in each scale are concatenated, then the scale-level features are concatenated into a fixed-length vector to be forwarded to the next fully-connected layers. We find that this strategy is ineffective for unsupervised instance retrieval, leading to inferior performances compared to other simple combination methods (see the part about multi-scale representation in section 5.2 for more details).
Image representation using off-the-shelf CNNs. Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image, in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the ImageNet 1000-class classification task and retrain the network on task-related datasets. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representations, Babenko & Lempitsky (2015) use the output feature maps of the last convolutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the previous results on four common instance retrieval datasets. Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed. By carefully choosing the settings for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost.
Our work differs from these papers in\nhat we explicitly explore the various factors that underpin the success of unsupervised instance re-\nrieval, which have not been fully explored and analysed. By carefully choosing the different setting\n\u2018or each factor and combining them in a complementary way, we show that a large improvement can\nde achieved without additional cost."}, {"section_index": "2", "section_name": "3.1 CNN FEATURES FOR INSTANCE RETRIEVAL", "section_text": "In this paper, we are mainly interested in extracting compact and discriminative image features using\nthe off-the-shelf CNNs in an efficient way. For a given image J, we simply subtract the mean value\nof the RGB channels from the original image and do not do other sophisticated preprocessing. Ther\nthe image is fed into the convolutional network and goes through a series of convolutions, non-lineat\nactivations and pooling operations. The feature activation maps of a certain layer can be interpretec\nas the raw image features, based on which we build the final image features. These feature maps\nform a tensor of size K x H x W, where K is the number of feature channels, and H and W are\nheight and width of a feature map. Each feature map represents a specific pattern which encodes\na small part of information about the original image. If we represent the set of feature maps as\nF = {F;},i =1,2,..., K, where F; is the i\u201d activation feature map, then the most simple image\nfeature is formulated as:\nf\n\n[ft fase. fis SK]"}, {"section_index": "3", "section_name": "3.2 IMPACTING FACTORS ON PERFORMANCE", "section_text": "Feature aggregation and normalization. After the feature maps of a certain layer are obtained.\nit is still challenging to aggregate the 3-dimensional feature maps to get compact vector represen-\ntations for images. Previous papers use either sum-pooling (Babenko & Lempitsky (2015) or max-\n\npooling (Tolias et al, followed by /2-normalization. Sum-pooling over a particular feature\n\nmap F\u2019 is expressed as\nH W\n\nfi= 00 Almn),i \u20ac {1,2,...,K}\n\nm=1n=1\nwhere m,n are all the possible values over the spatial coordinate of size H x W. In this paper,\nfor the first time, different combinations of aggregation and normalization methods (/2 and J, in the\nmanner of RootSIFT (Arandjelovi\u00e9 & Zisserman| 2012)) are evaluated and their results are reported.\nJutput layer selection. has shown that image features aggregated fror\nhe feature activation maps of certain layers have interpretable semantic meanings. \\Gong et al\n2014) and (2014) use the output of the first fully-connected layer to obtain th\nmage features, while|Babenko & Lempitsky (2015) and|Tolias et al. (2015) use the output featur\nnaps of the last convolutional layer. But these choices are somewhat subjective. In this paper, w\nxtract dataset image features from the output feature maps of different layers and compare thei\netrieval performances. Based on the finding in this experiment, we choose the best-performin;\n\nayer and also come up with a layer ensemble approach which outperforms state-of-the-art method\nsee sectio:\nImage resizing. Famous models such as Alexnet (Krizhevsky et al.|/2012) and VGGnet (Simonyar\n[& Zisserman] |2014) all require that the input images have fixed size. 
Output layer selection. It has been shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings of this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see section 5.3).
Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size. We postulate that the resizing operation may lead to the distortion of important information about the objects in natural images. Ultimately, this kind of operation may hurt the discriminative power of image features extracted from the network, thus degrading the retrieval performances. For the task of image retrieval, we think it is best to keep the images at their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:
- Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
- The minimum of each dataset image's sides is set to a fixed value, keeping the aspect ratio of the original image (denoted as one-fixed).
- The images are kept at their original sizes (denoted as free).
Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from the deep convolutional networks for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of feature still lacks the detailed and local information desired to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions. The vector representations of these small regions are computed, then the regional vectors are combined to form the image feature vectors. The single-scale representation of an image is just a special case of the multi-scale method in which the number of levels L equals 1. Figure 1 shows an example of a 3-level representation of an image. The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, thus unacceptable for instance retrieval tasks. Inspired by the work of Tolias et al. (2015) and others, we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. Then the regional feature vectors can be efficiently computed without re-feeding the corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored, and their retrieval performances are reported and analysed.
Figure 1: An illustration of the multi-scale representation of an image. The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into different numbers of equal-sized regions.
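The following numpy sketch illustrates one plausible reading of this multi-scale representation on the feature-map grid: each pyramid level splits the H x W grid into equal (non-overlapping) regions, each region is max-pooled, regional vectors are summed and l2-normalized per scale, and scale-level vectors are summed and l2-normalized. The region counts per level and the input tensor are illustrative.

import numpy as np

rng = np.random.default_rng(8)
fmap = np.maximum(0.0, rng.normal(size=(512, 37, 50)))   # K x H x W, free-size input

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def multiscale_feature(fmap, levels=(1, 2, 3)):
    K, H, W = fmap.shape
    f = np.zeros(K)
    for g in levels:                                     # g x g regions at this level
        hs = np.linspace(0, H, g + 1).astype(int)
        ws = np.linspace(0, W, g + 1).astype(int)
        scale_vec = np.zeros(K)
        for i in range(g):
            for j in range(g):
                region = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                scale_vec += region.max(axis=(1, 2))     # regional max-pooling
        f += l2n(scale_vec)                              # scale-level feature
    return l2n(f)                                        # final image feature

print(multiscale_feature(fmap).shape)                    # (512,)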
PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work has shown evidence that PCA-compressed and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations.
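A short numpy sketch of this post-processing step: learn the PCA-and-whitening projection on one collection of features (e.g. Paris6k) and apply it to another (e.g. Oxford5k), keeping d components. The feature arrays are random stand-ins.

import numpy as np

rng = np.random.default_rng(9)
train_feats = rng.normal(size=(6000, 512))     # features of the auxiliary dataset
test_feats = rng.normal(size=(5000, 512))      # features of the query dataset

def fit_pca_whiten(X, d=256, eps=1e-9):
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:d]          # top-d principal directions
    P = vecs[:, order] / np.sqrt(vals[order] + eps)   # whitening projection
    return mean, P

mean, P = fit_pca_whiten(train_feats)
Y = (test_feats - mean) @ P
Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # final l2 normalization
print(Y.shape)                                  # (5000, 256)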
"}, {"section_index": "4", "section_name": "4 IMPLEMENTATION", "section_text": "We use the open source deep learning framework Caffe (Jia et al., 2014) for all our experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on past practices for networks to go deeper, a consideration for moderate computational cost, and also the results from Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.
Network transformation. The original VGG-19 network only accepts an image of fixed size (224 x 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit), and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about network transformations, see appendix A.
In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for the different impacting factors and give detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets."}, {"section_index": "5", "section_name": "5.1 DATASETS AND EVALUATION METRICS", "section_text": "The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries - each having 5 queries with their ground truth relevant image list - are provided. For each query, a bounding box annotation is also provided to denote the query region. During the experiments, we report results using the full query images (denoted as full-query) and image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.
The Paris6k dataset (Philbin et al., 2008) includes 6412 images from Flickr which contain 11 landmark buildings and general scenes from Paris. (Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.) Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.
The Oxford105k dataset contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr. (The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset.) The 100,000 images are disjoint with the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.
The UKB dataset (Nister & Stewenius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variation in orientation, scale, lighting and shooting angles. During the experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.
In this section, we report the results of experiments on the impact of the different factors and analyse their particular impact. The experiments in this section are conducted on the Oxford5k dataset.
Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (l2 and l1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1.
Table 1: Comparison between different combinations of feature aggregation and normalization methods.
Method | full-query | cropped-query
max-l1 | 52.4 | 48.0
sum-l2 | 58.0 | 52.6
sum-l1 | 60.3 | 56.3
max-l2 | 60.1 | 53.5
Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after preliminary experiments with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.
Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.
Figure 2 shows the retrieval performances of image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase as the layer rises from the lower layer conv3_3 to higher layers and plateau at layers conv5_4 and fc6-conv; then the performances begin to decrease as the layers increase to fc7-conv.
The results show that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meanings of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meanings of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meanings of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).
Figure 2: Performance comparison between different layers (mAP against layer, conv3_3 through fc7-conv, for the full and cropped queries). This experiment is conducted using the free input image size.
Image resizing. We experiment with the 3 kinds of image resizing strategies detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.
Table 2: Comparison of different input image resizing strategies. The numbers in the parentheses denote the sizes at which the maximum mAPs are achieved.
Method | full-query | cropped-query
two-fixed | 55.5 (864) | 38.7 (896)
one-fixed | 59.0 (800) | 39.3 (737)
free | 58.0 | 52.6
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., region-level as well as scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods both lead to inferior results. The performance drop for the first, in the case of cropped-query, can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) also leads to longer running times. Considering all this, we do not use concatenation of features in the following experiments.
Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas; "s2" and "s3" mean that overlap occurs in level 2 or 3. "weighing" denotes whether the features from each level are added using the same weight or different weights. "version" denotes the different choices of the number of regions in each scale.
 | scale | overlap | weighing | version | full-query | cropped-query
(a1) | 2 | x | x | - | 63.5 | 59.0
(a2) | 2 | x | yes | - | 63.9 | 61.0
(b1) | 3 | x | x | - | 64.2 | 60.9
(b2) | 3 | x | yes | - | 62.6 | 61.0
(b3) | 3 | s2 | x | - | 64.8 | 60.8
(c1) | 4 | x | x | v1 | 65.1 | 61.4
(c2) | 4 | x | yes | v1 | 64.8 | 60.7
(c3) | 4 | yes | x | v1 | 65.5 | 60.8
(c4) | 4 | x | x | v2 | 65.9 | 61.5
(c5) | 4 | yes | x | v2 | 65.4 | 61.2
(c6) | 4 | x | x | v3 | 64.5 | 61.3
(c7) | 4 | s3 | x | v3 | 65.8 | 62.2
(c8) | 4 | s2,s3 | x | v3 | 66.3 | 62.6
We conduct extensive experiments to decide the best configurations for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performance. For the 2- and 3-scale representations, the region numbers for each level are {1x1, 2x2} and {1x1, 2x2, 3x3}. For the 4-scale representation, 3 versions are used, which differ in the number of regions in each scale: for "v1", "v2" and "v3", the numbers of regions are {1x1, 2x2, 3x3, 4x4}, {1x1, 2x2, 3x3, 5x5} and {1x1, 2x2, 3x3, 6x6}, respectively. Table 3 (a1)(b1)(c6) shows the performance of using 2, 3 and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.
We also conduct experiments to find out whether weighing different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al., 2006): features from the coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of different scales for an L-scale representation are f^1, f^2, ..., f^L; then the image representation f is expressed as:

f = \frac{1}{2^L} f^1 + \sum_{i=2}^{L} \frac{1}{2^{L-i+1}} f^i

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighing different scales leads to better performance. But after more experiments, we find that the weighing method generally leads to inferior results as the number of scales increases, e.g., compare the results of row pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT. We should exercise caution when we apply the traditional wisdom found in SIFT to deep convolutional descriptors, which is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighing methods are used in computing our final image feature representations.
Figure 3: The number of principal components retained vs. mAP. We show the results of full and cropped queries using the PCA and whitening matrices learned from Oxford5k itself and from Paris6k, denoted as "full-self", "full-paris" and "crop-self", "crop-paris".
Next, we look into the issue of overlapping between different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighboring regions in either one or two scales of the pyramid (for the exact configurations of overlap in all the cases in Table 3, see Appendix B for the complete descriptions).
From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4-scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. So we decided to use overlap in levels 2 and 3 in computing our final features.
PCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k dataset using the PCA and whitening matrices learned from the Oxford5k or the Paris6k dataset, and l2-normalize these features to get the final image representations.
The retrieval results for 3 groups of features (from Table 3 (b3)(c1)(c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performance. For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements both in the case of full and cropped queries. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly; in fact, the improvement for the case of cropped-query is even more surprising. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that as the number of principal components retained increases, the performance for "PCA on self" and "PCA on Paris" differs greatly. As is shown in Figure 3, the performance for the former peaks at a relatively low dimension (around 100) and then begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.
Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in the parentheses indicate the dimensionality of the features used for obtaining the corresponding results.
Feature | full-query | cropped-query
3scale_overlap, original | 64.8 | 60.8
3scale_overlap, PCA on self | 65.4 (80) | 60.9 (112)
3scale_overlap, PCA on Paris | 70.6 (464) | 67.3 (480)
4scale_v3_overlap(s3), original | 65.1 | 61.4
4scale_v3_overlap(s3), PCA on self | 66.9 (80) | 61.9 (96)
4scale_v3_overlap(s3), PCA on Paris | 72.3 (464) | 70.8 (496)
4scale_v3_overlap(s2,s3), original | 66.3 | 62.8
4scale_v3_overlap(s2,s3), PCA on self | 69.0 (80) | 63.9 (144)
4scale_v3_overlap(s2,s3), PCA on Paris | 73.2 (496) | 71.2 (448)
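The PCA and whitening step can be sketched as follows, learning the projection on an auxiliary dataset (e.g. Paris6k when querying Oxford5k) and applying it to the query-dataset features; this is a schematic reconstruction under our own naming, not the authors' code.

    import numpy as np

    def learn_pca_whitening(X, dim):
        # X: N x D features from an auxiliary dataset (e.g. Paris6k for Oxford5k).
        mean = X.mean(axis=0)
        Xc = X - mean
        U, S, _ = np.linalg.svd(Xc.T @ Xc)          # eigenvectors of the covariance
        P = U[:, :dim] / np.sqrt(S[:dim] + 1e-12)   # project and whiten in one matrix
        return mean, P

    def apply_pca_whitening(X, mean, P):
        Y = (X - mean) @ P
        return Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)  # final l2-norm

    aux = np.random.rand(1000, 512)                 # stand-in for Paris6k features
    mean, P = learn_pca_whitening(aux, dim=496)
    queries = apply_pca_whitening(np.random.rand(55, 512), mean, P)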
Do the above results mean that we should always compute the PCA and whitening matrix from a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset gives inferior results compared to learning it on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets, as the Oxford5k dataset mainly contains images of buildings while the images in UKB are mainly small indoor objects. We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performance."}, {"section_index": "6", "section_name": "5.3. COMPARISON WITH OTHER METHODS", "section_text": "Based on the previous experimental results and our analysis of the different factors impacting the retrieval performance, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi-scale representation step, max-pooling of the feature maps is used, and regional vectors from the same scale are added together and l2-normalized. After that, features from different scales are summed and l2-normalized again. The second step involves applying the PCA and whitening operations on the features from the first step. The PCA and whitening matrix used is either learned from a different or the same dataset: specifically, for Oxford5k and Oxford105k it is learned on Paris6k, while for Paris6k and UKB it is learned on Oxford5k and UKB, respectively. The final PCA-and-whitened image features are used for reporting our method's performance.
Layer ensemble. Inspired by previous work on model ensembles to boost classification performance (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performance. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images remains in the range [0, 1]). We have evaluated various combinations of layers and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation, since the sizes of the output feature maps are already very small.
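A sketch of the layer-ensemble scoring, assuming l2-normalized features so that dot products are cosine similarities; the equal weights used here are purely illustrative, since the actual weights are tuned in the experiments.

    import numpy as np

    def ensemble_similarity(q_feats, db_feats, weights):
        # q_feats/db_feats: dicts layer_name -> l2-normalized feature matrices;
        # the weights sum to 1 so the fused score stays in [0, 1].
        score = 0.0
        for layer, w in weights.items():
            score = score + w * (q_feats[layer] @ db_feats[layer].T)  # cosine sim.
        return score

    q = {"conv5_4": np.random.rand(5, 496), "fc6_conv": np.random.rand(5, 496)}
    db = {"conv5_4": np.random.rand(100, 496), "fc6_conv": np.random.rand(100, 496)}
    for d in (q, db):                       # normalize rows to unit length
        for k in d:
            d[k] /= np.linalg.norm(d[k], axis=1, keepdims=True)
    sims = ensemble_similarity(q, db, {"conv5_4": 0.5, "fc6_conv": 0.5})
    ranks = np.argsort(-sims, axis=1)       # retrieval ranking per query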
Table 5: Comparison with state-of-the-art methods. "single" means multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performance. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv; the dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening.
method | D | Oxford5k full | Oxford5k cropped | Paris6k full | Paris6k cropped | Oxford105k full | Oxford105k cropped | UKB
Hao & Zisserman (2014) | 128 | - | 43.3 | - | - | - | 35.3 | 3.40
- | 128 | - | 44.8 | - | - | - | 37.4 | -
Jegou & Zisserman (2014) | 1024 | - | 56.0 | - | - | - | 50.2 | 3.51
- | 256 | 53.3 | - | 67.0 | - | 48.9 | - | 3.38
- | 512 | 55.7 | - | - | - | 52.2 | - | 3.56
Babenko & Lempitsky (2015) | 256 | 58.9 | 53.1 | - | - | 57.8 | 50.1 | 3.65
- | 256 | 62.5 | 63.5 | 72.0 | 73.5 | - | - | -
Tolias et al. (2015) | 512 | - | 66.8 | - | 83.0 | - | 61.6 | -
ours (single) | 512 | 73.0 | 70.6 | 82.0 | 83.3 | 68.9 | 65.3 | 3.75
ours (single, compression) | - | 73.2 | 71.2 | 83.0 | 84.0 | 68.9 | 65.8 | 3.76
ours (layer ensemble) | 1024 | 75.6 | 73.7 | 85.7 | 85.9 | 71.6 | 69.2 | 3.81
The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, showing a large improvement over previous methods. This suggests that features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.
Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelovic & Zisserman, 2012). The results are shown in Table 5. On all the datasets and in the different scenarios (full or cropped), our method achieves the best performance at comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%, respectively.
In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performance of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performance over previous methods on four datasets. When combined with the "layer ensemble" technique, our method can achieve further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.
R. Arandjelovic and A. Zisserman. Three things everyone should know to improve object retrieval. In Computer Vision and Pattern Recognition (CVPR), pp. 2911-2918, June 2012.
R. Arandjelovic and A. Zisserman. All about VLAD. In Computer Vision and Pattern Recognition (CVPR), pp. 1578-1585, June 2013.
R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. CoRR, abs/1406.5774, 2014.
Artem Babenko and Victor Lempitsky. Aggregating local deep features for image retrieval. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
Ross Girshick. Fast R-CNN.
In International Conference on Computer Vision (ICCV), 2015.
Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In European Conference on Computer Vision (ECCV), pp. 392-407. Springer International Publishing, Cham, 2014.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
H. Jegou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3310-3317, June 2014.
H. Jegou, M. Douze, C. Schmid, and P. Perez. Aggregating local descriptors into a compact image representation. In Computer Vision and Pattern Recognition (CVPR), pp. 3304-3311, June 2010.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European Conference on Computer Vision, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 2169-2178, June 2006.
Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 2161-2168, June 2006.
J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Computer Vision and Pattern Recognition (CVPR), June 2007.
J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In Computer Vision and Pattern Recognition (CVPR), pp. 1-8, June 2008.
Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, abs/1412.6574, 2014.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015.
Gaurav Sharma and Bernt Schiele. Scalable nonlinear embeddings for semantic category-based image retrieval. In ICCV, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Josef Sivic and Andrew Zisserman. Video Google: A text retrieval approach to object matching in videos. In Ninth IEEE International Conference on Computer Vision, pp. 1470-1477. IEEE, 2003.
C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, June 2015.
G. Tolias, R. Sicre, and H. Jegou. Particular object retrieval with integral max-pooling of CNN activations. ArXiv e-prints, November 2015.
Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014, pp. 818-833. Springer, 2014."}, {"section_index": "7", "section_name": "APPENDIX A THE NETWORK TRANSFORMATIONS", "section_text": "In order for the network to process images of varying sizes, we change the layers fc6, fc7 and fc8 of the original model to fc6-conv, fc7-conv and fc8-conv. It should be noted that there are certain constraints on the input image size due to the network's inherent design. The original network accepts an image of fixed size (224 x 224), so the output feature maps of the last convolutional layer conv5_4 are of size 512 x 7 x 7. As a result, when we change the operation between layer conv5_4 and fc6 from inner product to convolution, each filter bank kernel between conv5_4 and fc6-conv has size 7 x 7. This in turn means that if we are to extract features from layer fc6-conv and above, the minimum size of an input image must be equal to or greater than 224. For output feature maps of layer conv5_4 and below, there are no restrictions on the input image size. During the experiments, when we are extracting features from layer fc6-conv and above, the minimum side of an image is set to 224 if it is less than 224.
In this paper, the overlaps between different regions occur in the 3- and 4-scale pyramids. A single region in each scale can be specified as the combination of a slice from the width and a slice from the height of the feature map. If a scale has N x N regions, then the numbers of slices in the width and height of the feature map are both N. We use the same set of slices for both the width and the height in the experiments.
Each slice is written as a (start, end) pair, in proportion to the length of the feature map width or height. In the 3-scale configuration (Table 3 (b3)), overlap occurs only in scale 2, and the slices are {(0, 2/3), (1/3, 1)}. In the 4-scale configurations v1 (Table 3 (c1)-(c3)), v2 (Table 3 (c4)(c5)) and v3 (Table 3 (c6)-(c8)), the slices for scale 2 are {(0, 2/3), (1/3, 1)} and the slices for scale 3 are {(0, 1/2), (1/4, 3/4), (1/2, 1)}."}]
rJJRDvcex
[{"section_index": "0", "section_name": "LAYER RECURRENT NEURAL NETWORKS", "section_text": "Weidi Xie, Alison Noble & Andrew Zisserman
Department of Engineering Science, University of Oxford, UK
In this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contributions are three-fold: (i) we propose a hybrid neural network architecture that interleaves traditional convolutional layers with L-RNN modules for learning long-range dependencies at multiple levels; (ii) we show that an L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, and the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules achieves comparable results (5.39% top1 error) using only 15 layers and fewer parameters than ResNet-164 (5.46%); and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In computer vision tasks, such as image classification or pixel-level prediction, multi-scale contextual information plays a very important role in achieving high performance. The original architectures for these tasks (e.g. He et al. (2016a); Krizhevsky et al. (2012); Long et al.
(2015); Ronneberger et al. (2015); Simonyan & Zisserman (2015); Szegedy et al. (2015)) were able to obtain multi-scale context with a large spatial footprint by the combination of filters through the layers of the network, so that a large receptive field was effectively built up. Indeed, the final layers of these networks use average pooling or fully connected layers (convolution with a large kernel) so that the effective receptive field covers the entire input image patch. More recent pixel prediction architectures have used dilated convolutions (Yu & Koltun, 2016; Chen et al., 2016), which are able to aggregate multi-scale contextual information without losing resolution (due to the spatial pooling and strides in the original architectures), and without incurring the penalty of having to learn many parameters for convolutions with very large kernels.
In this paper we introduce an alternative "module" for learning multi-scale spatial contextual information by using Recurrent Neural Networks (RNNs) within layers. This approach is inspired by the ReNet architecture of Visin et al. (2015), which we extend here into a hybrid architecture that interleaves traditional convolutional neural network (CNN) modules with layer recurrent modules, and we term it a Layer Recurrent Neural Network (L-RNN). An L-RNN module is a combination of 1D RNNs, and is able to learn contextual information adaptively, with the effective receptive field able to reach across the entire feature map or image, if that is required for the task. The hybrid network combines the best of both worlds: canonical CNNs are composed of filters that are efficient in capturing features in a local region, whilst the L-RNNs are able to learn long-range dependencies across a layer efficiently with only a small number of parameters.
We describe the basic L-RNN module in Section 2, and discuss different fusion choices for the hybrid architecture by incorporating L-RNNs into residual blocks in Section 3. In addition, in Section 4 we explain how L-RNN modules can be inserted into pre-trained CNNs seamlessly. This means that the entire network does not have to be trained from scratch; only the added L-RNNs are fine-tuned together with the pre-trained networks, and the experiments show that this addition always improves performance. In Section 5, we experiment on CIFAR-10 classification with hybrid networks of increasing depths; by using Layer Normalization (Ba et al., 2016), we are able to train vanilla RNNs to match the performance of GRUs (Chung et al., 2015) while using fewer parameters.
It is worth noting that (broadly) recurrence can be used in feed-forward multi-layer convolutional neural network architectures in two ways: between layers, and within layers. For example, between-layer recurrence was used for scene labelling in (Liang et al., 2015; Pinheiro & Collobert, 2014), with convolutions applied recursively on top of feature maps from different layers or raw input images. And in (Zheng et al., 2015), spatial dependencies are modelled explicitly for semantic segmentation with densely connected Gaussian CRFs, by iterated application of bilateral filtering using between-layer recurrence.
By contrast, our Layer-RNN architecture falls into the second category, where within-layer recurrence is used to capture dependencies. Others have learnt contextual information from within-layer recurrence for tasks such as object detection (Bell et al., 2016), and low-level vision problems, such as de-noising, colourization and smoothing (Liu et al., 2016).
We postpone discussing in detail the relationships of the proposed Layer-RNN modules to these architectures, and to those of ReNet (Visin et al., 2015) and ReSeg (Visin et al., 2016), until we have introduced the L-RNN in Section 2.
The architecture of the network (Figure 1) is composed of two parts. Local features are calculated by the low-level CNN module; the Layer-RNN (L-RNN) module, consisting of several 1D spatial RNNs, is then applied to capture the spatial dependencies. By scanning across the feature maps in different directions, the complete L-RNN is able to learn the receptive field in an adaptive way, up to the size of the entire image. These two modules can be combined to build networks in various ways: for example, an L-RNN module can be stacked on top of several CNN modules at the final layer, or CNN and L-RNN modules can be interleaved at multiple levels.
Figure 1: A CNN module followed by a Layer-RNN module built from two spatial recurrent modules. In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions; hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up. The receptive field for the black pixel in (B) is labelled in orange. In (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module that is able to propagate information over the entire image.
As shown in Figure 1, the Layer-RNN (L-RNN) module is a combination of the 1D spatial recurrent modules (B) and (C). In each module, there are two 1D RNNs scanning across the feature maps horizontally or vertically from two directions (bidirectional spatial RNNs), and their hidden states are updated at every spatial step. Consequently, for each of the horizontal and vertical directions, two output feature maps are obtained with the same width and height as the input feature maps. In our implementation, we simply sum up these output feature maps (an alternative is to concatenate the output feature maps, but that would increase the number of parameters).
More formally, assume the feature maps (layer L) coming into the L-RNN module are X^{L} \in R^{m x n x d} and the output is X^{L+1} (layer L+1), where m, n, d refer to the width, height, and number of feature maps respectively for the input layer. For simplicity, assume the input to the 1D spatial RNNs from X^{L} is a feature vector at each spatial location; each row or column of the feature maps is treated as one sequence. When scanning from left to right, the feature responses for location (i, j) can be calculated as:

X_{i,j}^{L+1} = f(U X_{i,j}^{L} + V X_{i,j-1}^{L+1} + b)    (left to right)    (1)

where X_{i,0}^{L+1} = 0 \in R^{D}, X_{i,j}^{L} \in R^{d}, X_{i,j}^{L+1} \in R^{D}, U \in R^{D x d}, V \in R^{D x D}, b \in R^{D}; D denotes the number of nodes used in the 1D spatial RNN, and f refers to the non-linearity function. 1D spatial RNNs scanning in the other directions can be calculated similarly. Notice that the first term of equation 1 encodes local information independently, resembling a normal convolutional layer, and the second term characterizes the within-layer recurrence (U is a convolution matrix, V a recurrence matrix). We make use of this observation in Section 4.
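For concreteness, a literal (unvectorized) NumPy sketch of equation 1 for the row-scanning module (B), with f = ReLU; the bidirectional outputs are summed as described above. Names and shapes are illustrative only.

    import numpy as np

    def row_scan(X, U, V, b, reverse=False):
        # X: m x n x d input feature map; returns m x n x D hidden states (eq. 1).
        m, n, d = X.shape
        D = U.shape[0]
        H = np.zeros((m, n, D))
        cols = range(n - 1, -1, -1) if reverse else range(n)
        for i in range(m):
            h = np.zeros(D)                    # zero initial state for each row
            for j in cols:
                h = np.maximum(U @ X[i, j] + V @ h + b, 0.0)   # f = ReLU
                H[i, j] = h
        return H

    X = np.random.rand(8, 8, 64)
    U, V, b = np.random.randn(32, 64) * 0.1, np.zeros((32, 32)), np.zeros(32)
    H = row_scan(X, U, V, b) + row_scan(X, U, V, b, reverse=True)   # bidirectional, summed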
"}, {"section_index": "3", "section_name": "2.2 DISCUSSION AND RELATION TO OTHER WORK", "section_text": "As can be seen in Figure 1C, the effective receptive field can cover the entire image. However, the actual receptive field depends on the parameters of the RNNs, and can be learnt adaptively. As an insight into what is learnt, consider a separable filter, such as an axis-aligned 2D Gaussian. Such filters can be applied exactly by a composition of 1D Gaussian convolutions in the horizontal and vertical directions. The 1D spatial RNNs can approximate finite 1D convolutions of this type.
We next discuss the relation of the L-RNN to prior work. First, ReNets (Visin et al., 2015), an architecture made entirely of 1D RNNs (i.e. no CNNs). In ReNets, the input images are first split into non-overlapping patches of size m x n x d, where m, n, d refer to width, height and feature channels respectively. The 1D RNNs take the flattened patch (mn x d) as input, and output a feature vector of size D x 1, where D refers to the number of nodes used in the RNNs. In contrast, we interleave the L-RNN and CNN modules. There are two benefits of this: first, CNNs are more efficient at capturing local features than RNNs, and the L-RNN stacked upon them is able to learn dependencies between local features (rather than from the reformatted input channels); second, we are able to introduce more non-linearities between the hierarchical layers (through the convolutional+ReLU and pooling layers), while an RNN provides non-linearities within the same layer.
The 2D-RNN, proposed in (Graves & Schmidhuber, 2009; Theis & Bethge, 2015), is able to scan across the image or feature maps row-by-row, or column-by-column, sequentially, with each RNN node accepting input from three sources, namely projections of the current input and feedback from the two neighbouring nodes. By contrast, we use unidirectional 1D spatial RNNs, with each hidden node only accepting feedback from its previous node. Another advantage of our model is that rows or columns can be processed in parallel on GPUs, so the training time is shortened.
Bell et al. (2016) (Inside-Outside Net) and Visin et al. (2016) (ReSeg) describe similar ideas for object detection and semantic segmentation. Both architectures follow a pipeline that consists of a CNN feature extractor (VGG Net) followed by spatial RNNs at the final prediction stage. In contrast, we treat the L-RNN module as a general computational layer that can be inserted into any layer of modern architectures, and interleaved with CNN modules. This enables a network to be capable of learning contextual information in a flexible way at multiple levels, rather than with hand-crafted kernel sizes and receptive fields.
Note that the vanilla RNN unit consists of two terms, a local term and a recurrence term, where the local term is exactly the convolution operation. Therefore, the spatial RNN can be seen as a generalisation of the convolutional layer; in the worst case, when the RNN learns no context, the layer simply becomes a convolutional one. For tasks with limited data (semantic segmentation in our case), we propose a regime for inserting the L-RNN into a pre-trained FCN and fine-tuning the entire network end-to-end. This means that we directly increase the representational power of the model, and set the pre-trained model free to learn contextual information if it is needed.
In this section, we describe the architecture for incorporating 1D spatial RNNs into the computational block of Residual Networks (He et al., 2016b), and also discuss fusion methods for such blocks.
We start with the standard residual block of He et al. (2016b) (Figure 2(a)), and then replace the included CNN layer with bidirectional spatial RNNs, to include an L-RNN module instead.
Figure 2: (a) CNN module and (b) L-RNN module; each block stacks Conv (Linear), BN and ReLU layers, and its output is fused with the block input by forward, sum or concatenation.
We consider three fusion options for combining the features from such blocks with the input to subsequent layers, namely forward, sum and concatenation. Forward refers to the traditional feed-forward architectures:

X^{L+1} = F(X^{L}, W)

i.e. the block simply becomes a new layer; sum denotes the method of the original residual networks:

X^{L+1} = X^{L} + F(X^{L}, W)

so that the L-RNN module acts as a residual block; whilst, in concatenation, features from multiple layers (of the same spatial size) are concatenated:

X^{L+1} = [X^{L}, F(X^{L}, W)]    ([.] refers to concatenation)

Therefore, the number of channels of the output feature maps will be the sum of the channels of the two concatenated layers (and the number of parameters will be increased for the following layers). In the experimental evaluation of Section 5.1 we compare these options.
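The three fusion options can be summarized in a few lines; this sketch assumes channel-first C x H x W feature maps and is illustrative only.

    import numpy as np

    def fuse(X, FX, mode):
        # X: input to the block; FX: the block output F(X, W); both C x H x W.
        if mode == "forward":
            return FX                              # X^{L+1} = F(X^L, W)
        if mode == "sum":
            return X + FX                          # residual: X^L + F(X^L, W)
        return np.concatenate([X, FX], axis=0)     # concatenate along channels

    X, FX = np.random.rand(64, 8, 8), np.random.rand(64, 8, 8)
    assert fuse(X, FX, "concat").shape == (128, 8, 8)  # channels add up for the next layer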
"}, {"section_index": "4", "section_name": "4. ADDING A LAYER-RNN TO A PRE-TRAINED CNN", "section_text": "In this section, we describe how a Layer-RNN module can be seamlessly inserted into a pre-trained CNN. In a typical scenario, the CNN would be trained for classification on ImageNet (where there are copious annotations). After inserting the L-RNN modules, the hybrid L-RNN network can then be fine-tuned for a new task such as pixel-level prediction, e.g. semantic segmentation (where the annotated data is usually more limited). This trick naturally allows multi-level contextual information to be effortlessly incorporated. Avoiding training the network from scratch means the entire network can be re-purposed with the available annotations and trained end-to-end for the new task, whilst benefiting from the earlier classification training.
We illustrate the idea using a 1D convolution, but the same principles hold for the entire L-RNN module. As shown in Figure 3, the canonical CNN architecture for a 1D convolution can be denoted as:

X^{L+1} = f(W * X^{L} + b)

while the 1D spatial RNN adds a recurrence term from the previous hidden unit:

X_{i}^{L+1} = f(U * X_{i}^{L} + V X_{i-1}^{L+1} + b)

where U, V, b refer to the parameters that are shared across the whole scan-line. Notice that the 1D spatial RNN is designed to incorporate two terms: projections from the local region (input-to-hidden) and a recurrence term from the previous hidden unit (hidden-to-hidden). In fact, it is the presence of a non-zero recurrence matrix V that characterizes the 1D spatial RNN, and the responses can be calculated in a two-step way:

X^{inter} = U * X^{L} + b    (convolution)
X_{i}^{L+1} = f(X_{i}^{inter})    (i = 1, zero initial state)
X_{i}^{L+1} = f(X_{i}^{inter} + V X_{i-1}^{L+1})    (i > 1)

Figure 3: Spatial RNNs can be re-expressed as a two-step process, CNNs (local features) + recurrence. The similarity between CNNs and spatial RNNs is highlighted by the yellow box; the difference between them is shown in the blue box and arrow.
By interpreting the recurrence in this way, 1D spatial RNNs can be constructed by inserting recurrence directly into any convolutional layer, right after the convolution. If the recurrence matrix V is initialized as zero, and ReLU is the activation function, then the 1D spatial RNN will be initialized exactly as the pre-trained CNN. The complete L-RNN can be constructed by inserting two bidirectional spatial RNNs into subsequent layers of the pre-trained CNN. We derive the expression of the within-layer gradient for use in back-prop fine-tuning in Appendix B.
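A sketch of the two-step construction for one scan-line: the pre-activations X^{inter} come from the existing pre-trained convolution (U * X^L + b), and the recurrence matrix V is initialized to zero, so at insertion time the layer reproduces the pre-trained convolution + ReLU exactly; the assertion below checks this. Names are ours.

    import numpy as np

    def spatial_rnn_from_conv(X_inter, V):
        # X_inter: T x D pre-activations of one scan-line, from the pre-trained
        # convolution; V: D x D recurrence matrix, zero at insertion time.
        T, D = X_inter.shape
        H = np.zeros((T, D))
        H[0] = np.maximum(X_inter[0], 0.0)            # i = 1: zero initial state
        for i in range(1, T):
            H[i] = np.maximum(X_inter[i] + V @ H[i - 1], 0.0)   # i > 1
        return H

    X_inter = np.random.randn(10, 32)
    V0 = np.zeros((32, 32))
    # with V = 0 the layer equals the pre-trained convolution + ReLU exactly:
    assert np.allclose(spatial_rnn_from_conv(X_inter, V0), np.maximum(X_inter, 0.0))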
We test the proposed Layer-RNN on two supervised learning tasks: CIFAR-10 classification in Section 5.1, and PASCAL VOC 2012 segmentation in Section 5.2."}, {"section_index": "5", "section_name": "5.1 IMAGE CLASSIFICATION", "section_text": "In this section, we investigate classification performance under variations of an architecture containing L-RNN modules. We vary the depth of the network, the number and position of the L-RNN modules, the type of recurrent units in the RNNs, the pooling mechanism for the last pooling layer, and the method of fusing the block outputs.
There are two principal architectural variations. The first variation is that from Network A to D, we gradually increase the network depth by adding CNN modules, with the L-RNN module always stacked at the final stage to capture global information over the entire image, in a similar manner to the fully connected layers or average pooling in other networks. Network A has 5 convolutional layers.
The second principal variation, in Networks E and F, is to interleave CNN and L-RNN modules. This means that the network is capable of learning representations across large spatial footprints at any stage in the network. To show the effectiveness of adding L-RNN modules, we include a Baseline-CNN composed of only convolutional layers (7 layers, with concatenation used at every skip layer). Network E is built upon the Baseline-CNN by inserting L-RNN modules before CNN modules at multiple stages. To make sure the performance gain does not come from an increased number of parameters, we cut down the number of filters in the last CNN module to 128 (this number is 256 in the Baseline-CNN). Network F uses more convolutional layers interleaved with L-RNN modules.
Table 1: Network architectures for CIFAR-10 experiments. In Network A, a variety of selections are tested (coded blue in the table): for feature fusion, we may choose forward, sum or concatenation; in the L-RNN module, GRU and vanilla RNNs are tested; and max pooling or average pooling can be used as the global pooling. From Network A to D, the depth of the networks is gradually increased by adding CNN modules (coded red); for example, comparing C to B, two more CNN modules are added. Comparing Networks E and F with the Baseline-CNN, L-RNN modules (coded green) are interleaved with CNN modules.
Therefore, in al\nBaseline-CNN x B Cc D E F\ninput (32 x 32 x 3)\nConvolution (3 x 3 X 64)\nCNN Module\n(3 x 3 x 64)\nForward\nCNN Module\nCNN Module | @ % 3x 60) CNN Module\nCNN Module | \u201c&*3* 6% | ENN Module @ x 3 x 64)\n(e364) Forward (x3 x 64) Concatenate\nCNN Module | CNN Module Rorward CNN Module Concatenate CNN Module | CNN Module\n(3x 3x64) | @x3x64) | QNModue | @X3*x6 | QNMoie | 3X3 64) | Gx 3x 64)\nConeatenate | Feature Fusion | \u00b03 G4, Forward Gx 3x 128) | Concatenate Concatenate\nCNN Module CNN Module\nConcatenate Forward\n(3 x 3 x 64) (3 x 3 x 64)\nConcatenate CNN Module Concatenate\n(3 x 3 x 128)\nForward\nCNN Module\n(3 x 3 x 128)\nConcatenate\nMaxPooling @)\nLRNN Module\nCNN Module | CNN Module (128)\n(3x 3x 128) | (8x 3 x 128) Forward\nSes) Forward Forward rene CNN Module\nCNN Module | CNN Module Nerwand CNN Module | CNN Module one (3 x 3 x 64)\n(3 x3 x 128) | (x 3x 128) | GAN Modu | GX 3x 128) | Gx3x 128) | GAN Medute Concatenate\nConeatenate | Feature Fusion | (3 V3 Oo og) Forward Forward (3x 3x 128) | ERNNModule\nConcatenate CNN Module CNN Module Concatenate (128)\n(3x 3x 128) | (3 x 3 x 128) Forward\nConcatenate Concatenate CNN Module\n(3 x 3 x 64)\nConcatenate\nMaxPooling @)\nLRNN Module\n(128)\nForward\nrene CNN Module\nCNN Module | LRNN Module | LRNN Module | LRNN Module | LRNN Module one (3 x 3 x 64)\n(3 x 3 x 256) (256) (256) (256) (256) CNN Madute Concatenate\nConcatenate | Feature Fusion | Concatenate Concatenate Concatenate LRNN Module\n(3 x 3 x 128) (128)\nConcatenate\nForward\nCNN Module\n(3 x 3 x 64)\nConcatenate\n\nGlobal Pooling (8)\n\nDropout (0.5)\n\nSoftmax (10)\nother architectures (B,C,D), as we gradually increase the network depth by adding CNN module:\nwe fuse the skip layers by only alternating between concatenation and forward.\nFollowing the VGG-net (Simonyan & Zisserman 2015), in all architectures, convolutional kernels\nin the CNN Module are of size 3 x 3. Maxpoolings (2 x 2) are used as intermediate pooling, and\n8 x 8 global poolings (average or max) are applied at the end. To avoid overfitting, we use dropout\n(0.5). Training details and recurrent units are described in the Appendix [A] Implementations are\n\nmostly based in Theano (Theano Development Team||2016) with single NVIDIA Titan X.\nDataset & Evaluation. We conducted experiments on the CIFAR-10 dataset, which consists of\n40k training images, 10k validation and 10k testing images in 10 classes, and each of the image is\nof 32 x 32 pixels with RGB channels. We augment the training data with simple transformations\n(rotation, flipping, scaling) on the fly. The mean image over the whole training set is subtracted from\neach image during training. Following the standard evaluation protocol, we report the top/ error on\nthe testing set.\nResults & Discussion. We present detailed comparisons with other published methods in Table|2\nTable 2: Comparison with previous published methods on CIFAR-10\n\nThe networks are named by the chosen operation at every step; for instance, A-Forward-GRU-Max\nrefers to the architecture A with Forward feature fusion, GRU in L-RNN Module, and max pooling\nas the final global pooling.\nFrom the experimental results, we can draw the following conclusions\nIn our experiments for shallow networks, the summing of residual connections shows no bene-\nfit compared to feed-forward or concatenation. This observation is made from the results by A-\nForward-GRU-Max (7.57%), A-Concat-GRU-Max (7.35%) and A-Sum-GRU-Max (7.69%). 
Thus, as also employed in U-Net or DenseNet (Ronneberger et al., 2015; Huang et al., 2016), concatenation can be used as an alternative to summation in building deeper networks.
CIFAR-10 | # Params | # Conv Layers | Approx. Time / Epoch (s) | Top1 Error (%)
ReNet (Visin et al., 2015) | - | 0 | - | 12.35
- | - | - | - | 8.81
FitNet (Romero et al., 2014) | 2.5M | 19 | - | 8.39
Highway (Srivastava et al., 2015) | 2.3M | 19 | - | 7.54
ResNet-110 (He et al., 2016b) | 1.7M | 110 | - | 6.61
ResNet-164 (He et al., 2016b) | 1.7M | 164 | - | 5.46
DenseNet (Huang et al., 2016) | 27.2M | 100 | - | 3.74
Baseline-CNN-Avg | 1.56M | 7 | 331 | 9.07
Baseline-CNN-Max | 1.56M | 7 | 331 | 8.48
A-Concat-RNN-Avg | 0.9M | 5 | 293 | 7.65
A-Concat-RNN-Max | 0.9M | 5 | 293 | 7.43
A-Forward-GRU-Max | 1.68M | 5 | 315 | 7.57
A-Concat-GRU-Max | 1.95M | 5 | 377 | 7.35
A-Sum-GRU-Max | 1.99M | 5 | 383 | 7.69
B-GRU-Max | 2.3M | 9 | 542 | 6.62
B-RNN-Max | 1.27M | 9 | 483 | 6.78
C (GRU-Max) | 2.5M | 13 | 726 | 6.21
D (GRU-Max) | 3M | 19 | 1321 | 5.73
E (RNN-Max) | 0.97M | 7 | 462 | 5.96
F (RNN-Max) | 1.55M | 15 | 394 (TensorFlow on 2 GPUs) | 5.39
Comparison of basic choices. Max pooling consistently performs better when used as the global pooling in our case; this is seen in the results of Baseline-CNN-Avg (9.07%) vs. Baseline-CNN-Max (8.48%), and A-Concat-RNN-Avg (7.65%) vs. A-Concat-RNN-Max (7.43%). One possible explanation is that for classification tasks, decisions are based on the most salient features.
It can also be seen that vanilla RNN units trained with Layer Normalization can perform almost as well as GRU, while saving a large number of parameters (compare the results of A-Concat-RNN-Max with 0.9M parameters (7.43%) and A-Concat-GRU-Max with 1.95M parameters (7.35%), or B-RNN-Max with 1.27M parameters (6.78%) vs. B-GRU-Max with 2.3M parameters (6.62%)).
Networks with the L-RNN module stacked at the final stage. Even shallow networks with L-RNN modules (architectures A) can achieve performance comparable or superior to deep 19-layer architectures that require more parameters (e.g. Network A-Concat-RNN-Max (0.9M) vs. Highway (2.3M)). This confirms that when an L-RNN module is stacked on top of CNNs, it is able to capture global information, avoiding the multiple-layer route to increasing receptive fields in standard architectures, e.g. in (Romero et al., 2014; Srivastava et al., 2015).
As expected, networks can always improve classification performance by adding more CNN modules (going from architecture A to D). Network D with 19 convolutional layers performs better than ResNet-110 (by 0.3% top1 error, though Network D has more parameters than ResNet-110) and is slightly worse than ResNet-164 (by 0.25% top1 error). Thus, following this trend, it is reasonable to expect a benefit if L-RNN modules are combined with very deep networks, like the residual variants.
Networks with L-RNN modules interleaved with CNN modules. Comparing the performance of Baseline-CNN-Max (8.48%) with that of Network E (5.96%), there is a significant performance boost (2.5%) brought about simply by inserting L-RNN modules. Network E also has other advantages over Networks A to D: the number of parameters, the network depth, and the running time. Furthermore, when we continue increasing the network depth and interleaving L-RNN modules, Network F achieves results (5.39%) comparable to ResNet-164 (5.46%) with fewer parameters (1.55M vs. 1.7M).
This confirms, firstly, that L-RNN modules can be combined with very deep networks, and secondly, that rather than hand-crafting the kernel size, we should set the model free and let it learn contextual information at any stage.
In this section, we insert L-RNN modules into the VGG-16 network (pre-trained on ImageNet (Deng et al., 2009)), and fine-tune the entire network for the PASCAL VOC 2012 segmentation task. The objective is to boost the segmentation performance by providing contextual information via the L-RNNs. In particular, we consider the two FCN segmentation architectures originally introduced by Long et al. (2015), FCN-32s and FCN-8s; these are described below.
We proceed in three steps. First, we establish baselines by training our own FCN-32s and FCN-8s (Appendix C) and comparing their performance to those of Long et al. (2015). We also investigate the loss in performance as the fully connected (FC) layer is gradually reduced from 4096 to 512 channels. The reason for doing this is that when we insert the L-RNN module, its complexity (the dimension of the hidden units) depends on this number of channels, and so the overall complexity can be varied. In the second step, we insert L-RNNs into the FCN-32s architecture and evaluate the change in performance. Finally, we insert L-RNNs into the FCN-8s architecture and compare with previously published methods.
Dataset & Evaluation. We used a training set consisting of the VOC2012 training data (1464 images provided by the challenge organizers), augmented with training and validation data from Hariharan et al. (2014), which further extends the training set to a total of 11,685 images with pixel-level annotation. After removing the images overlapping between the VOC2012 validation data and this dataset, we are left with 346 images from the original VOC2012 validation set to validate our model. In all the following experiments, we use a single scale for the input images (384 x 384), and only horizontal flipping is used for data augmentation. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.
Architecture & Training. In the FCN-32s, input images are passed through the whole network and end up as predictions of size 12 x 12 x 21; up-sampling layers are then directly used to map the predictions back to 384 x 384 (32 times). In the FCN-16s, instead of directly up-sampling 32 times, the predictions are first up-sampled by 2 and summed with the stream predictions from pool4 (named after VGG16), then up-sampled by 16 times. In the FCN-8s, the stream predictions from pool3 are further added to the results from the FCN-16s; thus, up-sampling layers with only factor 8 are needed. (Appendix C)
For all the architectures, the base net (VGG16) is pre-trained on ImageNet (Deng et al., 2009); we further train on PASCAL VOC2012 for 50 epochs. Similar to the experiments on CIFAR-10, we iteratively increase or decrease the learning rate between 10^{-3} and 10^{-5} after every 10 epochs. The 4096-channel architectures are trained first, and then the number of channels in the FC layer is gradually reduced by randomly cutting them (e.g. from 4096 to 2048) and re-training the networks.
Results & Discussion. Table 3 shows the performance of the six baselines: FCN-32s and FCN-8s with the number of channels varying from 512 to 4096. We observe that reducing the nodes in the FC layers does produce a performance drop (from 4096 to 1024 nodes, 1% mean IOU) in both FCN-32s and FCN-8s. Although from 1024 to 4096 nodes the improvement is tiny, the difference in the number of parameters is over 64 million. Consequently, in the following experiments we perform experiments based on networks with 512, 1024 or 2048 channels only (i.e. not 4096). In comparison to the original performance for the FCN-8s architecture in Long et al. (2015), we exceed this (by 64.4 to 61.3 mean IOU) in our training. Thus, we use our trained networks as baselines.
"}, {"section_index": "6", "section_name": "5.2.2 FCN-32S WITH L-RNN MODULES", "section_text": "Architecture & Training. The architecture FCN-32s(L-RNN) is shown in Figure 4: the convolutional part of the architecture is initialized with the pre-trained FCN-32s (2048 channels in the FC layer) baseline. Then, two 1D spatial RNNs are inserted into the fc1 layer in the horizontal direction, and two 1D spatial RNNs are inserted into the fc2 layer in the vertical direction. The convolution activations of fc1 are shared for both left-right and right-left scanning; similarly for fc2, the convolution activations are shared for up-down and down-up scanning. Thus the fc1 and fc2 layers, together with the added 1D spatial RNNs, form a complete L-RNN module.
During training, as described in Section 4, the 1D spatial RNNs are initialized with a zero recurrence matrix. The entire network is then fine-tuned end-to-end with the PASCAL VOC2012 data. We adopt RMS-prop (Tieleman & Hinton, 2012) for 30 epochs with hyper-parameters lr = 10^{-4}, rho = 0.9, epsilon = 10^{-8}, then decrease the learning rate to lr = 10^{-5} for 10 epochs.
Results & Discussion. The results are shown in Table 3. Compare the 32s rows with and without the L-RNN for the FC layers with 512, 1024 and 2048 channels. As can be seen, the addition of the L-RNN always improves the segmentation performance over the pre-trained FCN-32s baselines. However, the improvement is not large, about 1 to 1.5% mean IOU. This is because the receptive field in the fully connected layers of the FCN-32s is already sufficiently large to cover 224 x 224 pixels of the input patch, and consequently the network cannot benefit much from the context provided by the L-RNN. The benefit is greater when L-RNNs are added to the lower layers (where the receptive fields of the convolutions are much smaller), and we turn to that case next."}, {"section_index": "7", "section_name": "5.2.3 FCN-8S WITH L-RNN MODULES", "section_text": "Architecture & Training. The architecture FCN-8s(L-RNN) is shown in Figure 4: as with the FCN-32s architecture, 1D spatial RNNs are inserted into the fc1 and fc2 layers to form an L-RNN module. L-RNNs are also inserted into the lower layers, namely the pool3 and pool4 layers. Unlike the FC layers in the FCN-32s, where the prediction for each central pixel comes from image patches of size 224 x 224, the predictions from pool3 and pool4 are based on receptive fields of much smaller sizes on the image (around 44 x 44 and 100 x 100 pixels respectively).
Thus, the inserted L-RNN modules must be able to model relatively long-range dependencies.
Figure 4: FCN-32s (above the blue dashed line) and FCN-8s with L-RNN modules. Spatial RNNs are inserted into the fully connected (FC) layers in all FCNs; every two FC layers construct a complete L-RNN module. {384, 192, 96} indicate the spatial sizes of the feature maps. Kernel sizes for the fully connected layers (n is an experimental variable, the number of channels): fc1: 7 x 7 x 512 x n, fc2: 1 x 1 x n x n, fc3: 1 x 1 x n x 21; fc4: 7 x 7 x 512 x 1024, fc5: 1 x 1 x 1024 x 1024, fc6: 1 x 1 x 1024 x 21.
During training, the network is initialized from the FCN-8s baseline, and then fine-tuned using segmentation data. Again the PASCAL VOC dataset is used. Furthermore, when comparing to the other previously published methods, the network is further trained on the COCO trainval dataset, and we use a densely connected CRF as post-processing (Krähenbühl & Koltun, 2012).
Results on the PASCAL VOC Validation set. The experimental results are shown in Table 3.
Table 3: Comparison of FCN networks on the PASCAL VOC2012 segmentation validation set.
Type | # of channels in FC | L-RNNs added | Pixel Acc % | Mean IOU %
32s | 512 | NO | 90.4 | 61.5
32s | 1024 | NO | 90.5 | 62.1
32s | 2048 | NO | 90.7 | 62.7
32s | 4096 | NO | 90.7 | 62.9
8s | 1024 | NO | 91.3 | 63.8
8s | 2048 | NO | 91.2 | 64.1
8s | 4096 | NO | 91.3 | 64.4
8s (original (Long et al., 2015)) | 4096 | - | - | 61.3
32s | 512 | YES | 90.8 | 62.7
32s | 1024 | YES | 90.9 | 63.4
32s | 2048 | YES | 91.1 | 64.2
8s | 2048 | YES | 92.6 | 69.1
Comparing the rows for 32s with and without the L-RNN, and those for 8s with and without the L-RNN, we can draw the following conclusions.
Improvement due to the skip layers. It can be seen (for IOU) that going from FCN-32s(2048) to FCN-8s(2048), where there are additional skip layers, the performance is boosted from 62.7 to 64.1. The skip layers in the FCN-8s architecture introduce more parameters, but this is not the reason for the performance boost, since FCN-8s(2048) and FCN-32s(4096) have a similar number of parameters yet perform very differently (64.1 vs. 62.9). This observation confirms that the performance gain comes from the skip layers, rather than from the increased number of parameters.
Improvement due to the L-RNN module. Inserting an L-RNN into the FC layers of FCN-32s(2048) only improves the performance from 62.7 to 64.2. However, as noted earlier, since the nodes in the
Table [4] shows the results of the FCN-8s with L-RNNs or\nthe PASCAL VOC test data, and also compares to others who have published on this dataset. The\nperformance is far superior to the original result using a FCN-8s with 409\u00a2\nchannels (whereas only 2048 channels are used here). We also compare to the dilated convolutior\nnetwork of (Yu & Koltun| 2016), obtaining comparable, though slightly better performance. Not\u00ab\nthat in (Yu & Koltun|/2016), multi-scale contextual information is captured by explicitly designing\ndilated convolution kernels, while the L-RNN is able to learn contextual information implicitly\nFinally, we compare to who add a densely connected CRF to FCN-8s. If we\nalso add a dense CRF as post-processing, we boost the performance by 1% in IOU (the same boost as\nobtained by (Yu & Koltun}|2016)). In Figure[5] we show the samples of semantic segmentations or\nMean IOU %\n\nMethods\n\nFCN-8s (\n\nLong et al] 2015}\ns (Zheng ol\n\nP P+CRE | P+COCO | P+COCO+CRF\n62.2 n/a n/a n/a\nn/a 72.0 n/a 74.7\nn/a n/a 73.5 74.7\n71.9 72.7 74.2 75.7\nthe PASCAL VOC2012 validation set. In each figure, we show our predictions and the results afte:\nCRF post-processing. Comparing with the end-to-end trainable CRF-RNN (Zheng et al.| 2015), ow\npredictions miss the small details, like the wheel of the bicycle, but show much better performanc\u00a2\nin determining the class of the segmented regions \u2014 something that context can really contribute to.\nThis paper has shown that the proposed L-RNN module is an alternative way of adding multi-level\nspatial context to a network. In fact, L-RNNs can be interleaved with convolutional layers to learn\ncontext at any stage. When the L-RNN is only used at the final stage after the CNNs, it gives\nshallow networks the receptive fields of far deeper networks. Furthermore, we have demonstrated\nthat inserting L-RNNs can boost the performance of pre-trained networks, and given an initialization\nprocedure that makes this training a simple matter of end-to-end fine tuning.\nThere is much left to investigate using L-RNNs as a new building block, and we suggest some av-\nenues here: (i) training the hybrid architectures on larger dataset, such as ImageNet\n, and learn representations that can be transferred to other vision tasks, (ii) a similar investiga-\nor deep residual networks where the residual blocks are either convolutional or L-RNNs; and\n(iii) including a CRF final layer in end-to-end training.\nCRF-RNN FCN(8s)-LRNN LRNN+CRF \u2014 Ground-truth\nFigure 5: Qualitative Results. First column: input image. Second column: prediction from\nZheng et al.|(2015). Third column: prediction from the our networks. Fourth column: CRF post-\nprocessing. Fifth column: ground-truth annotation.\nBell, Sean, Zitnick, C Lawrence, Bala, Kavita, and Girshick, Ross. Inside-outside net: Detectin:\nobjects in context with skip pooling and recurrent neural networks. CVPR, 2016.\nChung, Junyoung, Gulcehre, Caglar, Cho, Kyunghyun, and Bengio, Yoshua. Gated feedback recut\nrent neural networks. NIJPS, 2015.\nHariharan, Bharath, Arbeldez, Pablo, Girshick, Ross, and Malik, Jitendra. Simultaneous detection\nand segmentation. ECCV, 2014.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image\nrecognition. CVPR, 2016a.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual\nnetworks. ECCV, 2016b.\nHuang, Gao, Liu, Zhuang, and Weinberger, Kilian Q. 
Densely connected convolutional networks. https://arxiv.org/abs/1608.06993, 2016.

Krähenbühl, Philipp and Koltun, Vladlen. Efficient inference in fully connected CRFs with gaussian edge potentials. NIPS, 2012.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

Liang, Ming, Hu, Xiaolin, and Zhang, Bo. Convolutional neural networks with intra-layer recurrent connections for scene labeling. NIPS, 2015.

Liu, Sifei, Pan, Jinshan, and Yang, Ming-Hsuan. Learning recursive filters for low-level vision via a hybrid neural network. ECCV, 2016.

Long, Jonathan, Shelhamer, Evan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. CVPR, 2015.

Pinheiro, Pedro HO and Collobert, Ronan. Recurrent convolutional neural networks for scene labeling. ICML, 2014.

Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-net: Convolutional networks for biomedical image segmentation. MICCAI, 2015.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. NIPS, 2015.

Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. NIPS, 2015.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Visin, Francesco, Ciccone, Marco, Romero, Adriana, Kastner, Kyle, Cho, Kyunghyun, Bengio, Yoshua, Matteucci, Matteo, and Courville, Aaron. Reseg: A recurrent neural network-based model for semantic segmentation. CVPR, 2016.

Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip HS. Conditional random fields as recurrent neural networks. ICCV, 2015."}, {"section_index": "8", "section_name": "Appendices", "section_text": "In the Layer-RNN, we test gated recurrent units (GRU) for the RNN blocks (Chung et al., 2015). The GRU has two gates, namely a reset gate r and an update gate z. Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to use. Thus, the hidden state s_t of the GRU at time t can be computed as:

z_t = σ(x_t U^z + s_{t-1} W^z)
r_t = σ(x_t U^r + s_{t-1} W^r)
h_t = f(x_t U^h + (s_{t-1} ∘ r_t) W^h)
s_t = (1 - z_t) ∘ h_t + z_t ∘ s_{t-1}
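As a concrete reference for the equations above, here is a minimal NumPy sketch of a single GRU step (with f = tanh); the parameter names follow the equations, but the dimensions are illustrative assumptions.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, s_prev, Uz, Wz, Ur, Wr, Uh, Wh):
    # One step of the GRU equations above.
    z = sigmoid(x @ Uz + s_prev @ Wz)         # update gate z_t
    r = sigmoid(x @ Ur + s_prev @ Wr)         # reset gate r_t
    h = np.tanh(x @ Uh + (s_prev * r) @ Wh)   # candidate state h_t
    return (1.0 - z) * h + z * s_prev         # new state s_t

# Toy usage with a 4-dimensional input and an 8-dimensional state.
rng = np.random.RandomState(0)
p = [0.1 * rng.randn(*s) for s in [(4, 8), (8, 8)] * 3]
s = gru_step(rng.randn(4), np.zeros(8), *p)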
To simplify the training process and reduce the number of parameters, we also test vanilla RNNs for the RNN blocks, with Layer Normalization (Ba et al., 2016). In a standard RNN, the outputs in the recurrent layer are calculated from the current input x_t and the previous hidden states h_{t-1}, denoted as a_t = U x_t + V h_{t-1}. The layer normalized layer is computed as:

h_t = f((g / σ_t) ∘ (a_t - μ_t) + b)

where μ_t and σ_t are the mean and standard deviation of the components of a_t, U is the current input-to-hidden term, V is the hidden-to-hidden recurrence term, and b and g are defined as the bias and gain parameters of the same dimension as h_t.

During training, we iteratively increase and decrease the learning rate (learning rate restart) between 10^-3 and 10^-5, based on the conjecture (Figure 6) that networks tend to get trapped in regions with small derivatives, such as saddle points or bad local minima (Dauphin et al., 2014). Traditionally, the learning rate is decreased every several epochs, and the gradients used to update the parameters depend on both the learning rate and the derivatives of the loss function. At the end of training, both of these terms tend to be very small, so it becomes difficult for the networks to escape from these regions. During our training, we restart the learning rate every few epochs (we try 60 or 80 in our training), and then decrease it gradually.

Figure 6: Intuitive loss surfaces (side view), illustrating saddle points and bad local minima. Deep neural networks may easily be trapped in a saddle point or a bad local minimum.

"}, {"section_index": "9", "section_name": "B FINE-TUNING LAYER-RNNS WITH ZERO RECURRENCE MATRIX", "section_text": "In this section, we derive the procedure for fine-tuning the recurrence matrix when it is initialized as zeros. We will only consider 1D scan-lines of the spatial RNN, and therefore simplify the derivation to a 1D sequence. Consider the fully connected layer for simplicity; L, L+1 denote layers, t refers to the index of the input, f refers to ReLU, and U, V refer to the input-to-hidden matrix and recurrence matrix respectively:

s_t = U X_t^L + V X_{t-1}^{L+1} + b
X_t^{L+1} = f(s_t)

Assume E denotes the loss function for a specific task. Since V is shared for the whole 1D sequence (length denoted by T), the back-propagation within layer L+1 can then be derived as:

∂E/∂V = Σ_{t≤T} (∂E/∂X_T^{L+1}) (∂X_T^{L+1}/∂X_t^{L+1}) (∂X_t^{L+1}/∂s_t) (∂s_t/∂V)

where

∂X_T^{L+1}/∂X_t^{L+1} = Π_{i=t}^{T-1} ∂X_{i+1}^{L+1}/∂X_i^{L+1}, and ∂X_{i+1}^{L+1}/∂X_i^{L+1} = V^T · diag(f'(s_i))

When V is initialized to zero, every term of the sum with t < T vanishes (each contains at least one factor of V), so only the t = T term remains:

∂E/∂V = (∂E/∂X_T^{L+1}) (∂X_T^{L+1}/∂s_T) (∂s_T/∂V), where ∂s_T/∂V depends on X_{T-1}^{L+1}

At the first iteration, gradient descent gives V_1 = V_0 - α ∂E/∂V_0. Since V_0 is initialized as zero, V_1 = -α ∂E/∂V_0. In other words, instead of initializing the recurrence matrix V randomly or as the identity matrix, we actually initialize it based on the features in a local neighbourhood (equation 20).
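The conclusion of this derivation can be checked numerically. Below is a small finite-difference sketch (our own toy setup, not the paper's code) showing that the gradient of a loss with respect to the recurrence matrix V is generally nonzero at V = 0, so fine-tuning can move V away from its zero initialization:

import numpy as np

def loss(V, X, U, b, target):
    # Unrolls s_t = U x_t + V h_{t-1} + b with h_t = ReLU(s_t), then a
    # squared error on the final hidden state.
    h = np.zeros(b.shape)
    for t in range(X.shape[0]):
        h = np.maximum(0.0, X[t] @ U + h @ V + b)
    return 0.5 * np.sum((h - target) ** 2)

rng = np.random.RandomState(1)
X, U, b, target = rng.randn(5, 3), rng.randn(3, 4), rng.randn(4), rng.randn(4)
V0, eps = np.zeros((4, 4)), 1e-5
G = np.zeros_like(V0)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(V0)
        E[i, j] = eps
        G[i, j] = (loss(V0 + E, X, U, b, target)
                   - loss(V0 - E, X, U, b, target)) / (2 * eps)
print(np.abs(G).max())  # generally nonzero: V_1 = -alpha * dE/dV != 0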
During the back-propagation of spatial RNNs, gradients flow within layers; the between-layer gradient ∂E/∂X is calculated in the same way as in normal convolutional layers.

The complete FCN architectures used in the paper:

Figure 7: Complete FCNs used extensively in the paper.

In FCN-32s, output feature maps of spatial size 12 x 12 are directly up-sampled by 32 times. In FCN-16s, output feature maps of spatial size 12 x 12 are first up-sampled by 2, then summed with the prediction scores calculated from feature maps of spatial size 24 x 24, and up-sampled by 16 times. In FCN-8s, the summed prediction scores are further up-sampled by 2, then summed with the prediction scores calculated from feature maps of spatial size 48 x 48, and up-sampled by 8 times.

Kernel sizes for the fully connected layers:
fc1: 7 x 7 x 512 x 4096, fc2: 1 x 1 x 4096 x 4096, fc3: 1 x 1 x 4096 x 21
fc4: 1 x 1 x 512 x 1024, fc5: 1 x 1 x 1024 x 1024, fc6: 1 x 1 x 1024 x 21"}]
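The up-sampling arithmetic described above can be sketched in a few lines of NumPy; the nearest-neighbour up-sampling and random score maps below are illustrative stand-ins for the learned deconvolutions and prediction layers:

import numpy as np

def upsample(score, factor):
    # Nearest-neighbour up-sampling of an (H, W) score map by an integer factor.
    return np.kron(score, np.ones((factor, factor)))

rng = np.random.RandomState(0)
s32 = rng.randn(12, 12)  # scores from the fc layers (spatial size 12 x 12)
s16 = rng.randn(24, 24)  # scores from pool4 features (24 x 24)
s8 = rng.randn(48, 48)   # scores from pool3 features (48 x 48)

fcn32s = upsample(s32, 32)
fcn16s = upsample(upsample(s32, 2) + s16, 16)
fcn8s = upsample(upsample(upsample(s32, 2) + s16, 2) + s8, 8)
print(fcn32s.shape, fcn16s.shape, fcn8s.shape)  # all (384, 384)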
rJq_YBqxx
[{"section_index": "0", "section_name": "DEEP CHARACTER-LEVEL NEURAL MACHINE\nTRANSLATION BY LEARNING MORPHOLOGY", "section_text": "Shenjian Zhao\nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\nShanghai 200240, China\nNeural machine translation aims at building a single large neural network that can\nbe trained to maximize translation performance. The encoder-decoder architecture\nwith an attention mechanism achieves a translation performance comparable to the\nexisting state-of-the-art phrase-based systems. However, the use of large vocabulary\nbecomes the bottleneck in both training and improving the performance. In this\npaper, we propose a novel architecture which learns morphology by using two\nrecurrent networks and a hierarchical decoder which translates at character level.\nThis gives rise to a deep character-level model consisting of six recurrent networks.\nSuch a deep model has two major advantages. It avoids the large vocabulary issue\nradically; at the same time, it is more efficient in training than word-based models.\nOur model obtains a higher BLEU score than the bpe-based model after training\nfor one epoch on En-Fr and En-Cs translation tasks. Further analyses show that\nour model is able to learn morphology."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Neural machine translation (NMT) attempts to build a single large neural network that reads <\n\nsentence and outputs a translation (Sutskever et al.||2014). Most of the extant neural machine\ntranslations models belong to a family of word-level encoder-decoders Che\n\n2014). Recently,|Bahdanau et al.|(2015) proposed a model with attention mechanism whicl\n\nautomatically searches the alignments and greatly im,\nlarge vocabulary seems necessary for the word-level\n\nproves the performance. However, the use of <\nneural machine translation models to improve\n\nperformance (Sutskever et al.]{2014}|Cho et al.| (2015).\n(2016a) listed three reasons behind the wide adoption of word-level modeling: (i) wore\nis a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling\nConsider that a language itself is an evolving system. So it is impossible to cover all words in the\nlanguage. The problem of rare words that are out of vocabulary (OOV) is a critical issue which car\neffect the performance of neural machine translation. In particular, using larger vocabulary doe:\nimprove performance 2014} Cho et al.||2015). However, the training become:\n\nmuch harder and the vocabulary is often filled with many similar words that share a lexeme but have\ndifferent morphology.\nThere are many approaches to dealing with the out-of-vocabulary issue. For example, |Gulcehre\nfet al.| (2016); [Luong et al.|(2015) proposed to obtain the alignment information of\ntarget unknown words, after which simple word dictionary lookup or identity copy can be performed\nto replace the unknown words in translation. However, these approaches ignore several important\nproperties of languages such as monolinguality and crosslinguality as pointed out by|Luong and"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Intuitively, it is elegant to directly model pure characters. However, as the length of sequenc\u00ab\ngrows significantly, character-level translation models have failed to produce competitive result:\ncompared with word-based models. 
In addition, they require more memory and computational resources; in particular, it is much more difficult to train the attention component. For example, Ling et al. (2015a) proposed a compositional character-to-word (C2W) model and applied it to machine translation (Ling et al., 2015b). They also used a hierarchical decoder, which had been explored before in other contexts (Serban et al., 2015). However, they found the character-level models slow and difficult to train, and one has to resort to layer-wise training of the neural network and to applying supervision to the attention component. In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.

In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting the structure of words. It is built on two recurrent neural networks: one for learning the representation of preceding characters and another for learning the weight of this representation for the whole word. Unlike subword-level models based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword units automatically. Compared with a CNN word encoder (Kim et al., 2016), our model is able to generate a meaningful representation of the word. To decode at the character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which then generates a character sequence until generating a delimiter. In this way, our model keeps almost the same encoding length for the encoder as word-based models, but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model, which consists of six recurrent networks, achieving higher performance.

In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance which is comparable to the state-of-the-art neural machine translation models on the tasks of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.

Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x_1, ..., x_{T_x}} into a sequence of hidden states h = {h_1, ..., h_{T_x}}:

h_t = f(e(x_t), h_{t-1})

where e(x_t) ∈ R^m is an m-dimensional embedding of x_t. The decoder, another RNN, is often trained to predict the next word y_t given the previously predicted words
{y_1, ..., y_{t-1}} and the context vector c_t; that is,

p(y_t | {y_1, ..., y_{t-1}}, x) = g(e(y_{t-1}), s_t, c_t)    (1)

"}, {"section_index": "3", "section_name": "where", "section_text": "s_t = f(e(y_{t-1}), s_{t-1}, c_t)

and g is a nonlinear and potentially multi-layered function that computes the probability of y_t. The context c_t depends on the sequence {h_1, ..., h_{T_x}}. Cho et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., c_t = h_{T_x}, while Bahdanau et al. (2015) computed c_t by an alignment model, which handles the bottleneck that the former approach meets.

The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters θ of the model.

"}, {"section_index": "4", "section_name": "3 DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION", "section_text": "We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix), where the size of the vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function; however, a large vocabulary makes the softmax computationally intractable. We correspondingly devise two novel architectures: a word encoder which utilizes the morphology, and a hierarchical decoder which decodes at the character level. Accordingly, we propose a deep character-level neural machine translation model (DCNMT).

"}, {"section_index": "5", "section_name": "3.1 LEARNING MORPHOLOGY IN A WORD ENCODER", "section_text": "Many words can be subdivided into smaller meaningful units called morphemes, such as "any-one", "any-thing" and "every-one." At the basic level, words are made of morphemes, which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules of how they are combined. Even if the word encoder had never seen "everything" before, with an understanding of English morphology, the word encoder could gather the meaning easily. Thus, learning morphology in a word encoder might speed up training.

The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word 'anyone' as

r_anyone = tanh(Σ_{t=1}^{6} w_t r_t)

where each r_t is computed over the characters by

r_t = f(e(x_t), r_{t-1})

Each r_t contains information about the preceding characters. The weight w_t of each representation r_t is computed by

w_t = exp(aff(h_t))

where h_t is another RNN hidden state at time t and aff() is an affine function which maps h_t to a scalar. Here, we use a BiRNN to compute h_t, as shown in Figure 1. Instead of normalizing the weights by Σ_t exp(aff(h_t)), we use the activation function tanh, as it performs best in our experiments.

Figure 1: The representation of the word 'anyone.'

We can regard the weight w_t as the energy that determines whether r_t is a representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate "convenienter" correctly, which validates our idea.

After obtaining the representation of the word, we can encode the sentence using a bidirectional RNN as in RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.
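The word encoder of Section 3.1 can be summarized by the short NumPy sketch below; the tanh RNN transition and all dimensions are simplifying assumptions made for illustration, not the paper's implementation.

import numpy as np

def rnn(X, U, W):
    # Simple tanh RNN over a sequence X of shape (T, m); returns all states.
    H = np.zeros((X.shape[0], W.shape[0]))
    h = np.zeros(W.shape[0])
    for t in range(X.shape[0]):
        h = np.tanh(X[t] @ U + h @ W)
        H[t] = h
    return H

def encode_word(E, p):
    # E: (T, m) character embeddings of one word.
    R = rnn(E, p["U_r"], p["W_r"])                          # r_t
    Hf = rnn(E, p["U_h"], p["W_h"])                         # forward BiRNN states
    Hb = rnn(E[::-1], p["U_h2"], p["W_h2"])[::-1]           # backward BiRNN states
    w = np.tanh(np.concatenate([Hf, Hb], axis=1) @ p["a"])  # energies w_t
    return np.tanh((w[:, None] * R).sum(axis=0))            # tanh(sum_t w_t r_t)

rng = np.random.RandomState(0)
m, d = 8, 16
p = {k: 0.1 * rng.randn(*s) for k, s in
     [("U_r", (m, d)), ("W_r", (d, d)), ("U_h", (m, d)), ("W_h", (d, d)),
      ("U_h2", (m, d)), ("W_h2", (d, d)), ("a", (2 * d,))]}
r_word = encode_word(rng.randn(6, m), p)  # e.g. the six characters of "anyone"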
"}, {"section_index": "6", "section_name": "3.2 HIERARCHICAL DECODER", "section_text": "To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch and contains the information of the target word: specifically, s_t in Eqn. (1) contains the information of the target word at time t. Instead of using a multi-layer network following a softmax function to compute the probability of each target word from s_t, we employ a second-level decoder which generates a character sequence based on s_t.

We implement the second-level decoder with a variant of the GRU, which we call HGRU (one could equally use LSTM units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until generating a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it sets its state to the next output of the first-level decoder. Given the previous output character sequence {y_0, y_1, ..., y_{t-1}}, where y_0 is a token representing the start of sentence, and the auxiliary sequence {a_0, a_1, ..., a_{t-1}} which only contains 0 and 1 to indicate whether y_t is a delimiter (a_0 is set to 1), HGRU updates the state as follows:

g_{t-1} ← (1 - a_{t-1}) ∘ g_{t-1} + a_{t-1} ∘ s_{i_t}
q_t^j = σ([W_q e(y_{t-1})]^j + [U_q g_{t-1}]^j)
z_t^j = σ([W_z e(y_{t-1})]^j + [U_z g_{t-1}]^j)
g̃_t^j = φ([W e(y_{t-1})]^j + [U (q_t ∘ g_{t-1})]^j)
g_t = z_t ∘ g_{t-1} + (1 - z_t) ∘ g̃_t

where s_{i_t} is the output of the first-level decoder, calculated as in Eqn. (8). We can compute the probability of each target character y_t based on g_t with a softmax function:

p(y_t | {y_1, ..., y_{t-1}}, x) = softmax(g_t)

The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence, and it would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks, which must build static symbolic expressions). Luong and Manning (2016) use two forward passes (one word-level and one character-level) in batch training, which is less efficient. However, in our model we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process efficient. It is a T_y × T matrix R, where T_y is the number of delimiters (the number of words) in the target character sequence and T is the length of the target character sequence. R[i, j_1 + 1] to R[i, j_2] are set to 1, where j_1 is the index of the (i-1)-th delimiter and j_2 is the index of the i-th delimiter in the target character sequence; the index of the 0-th delimiter is set to 0. For example, when the target output is "g o _ ! _" and the output of the first-level decoder is [s_1, s_2], the unfolding step is:

[s_1, s_2] · [[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]] = [s_1, s_1, s_1, s_2, s_2]

therefore {s_{i_1}, s_{i_2}, s_{i_3}, s_{i_4}, s_{i_5}} is correspondingly set to {s_1, s_1, s_1, s_2, s_2} in the HGRU iterations. After this procedure, we can compute the probability of each target character with the second-level decoder according to the equations above.
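The construction of the unfolding matrix R can be written out directly; the following NumPy sketch (our illustration, using a 0/1 delimiter indicator as in the auxiliary sequence a) reproduces the example above.

import numpy as np

def unfold_matrix(delims):
    # delims[j] = 1 iff the j-th target character is a delimiter.
    # Returns R of shape (T_y, T) with ones over the span of each target word.
    T, Ty = len(delims), sum(delims)
    R = np.zeros((Ty, T))
    word = 0
    for j, d in enumerate(delims):
        R[word, j] = 1.0
        if d == 1:
            word += 1
    return R

# Target "g o _ ! _": delimiters at positions 3 and 5, outputs [s1, s2].
R = unfold_matrix([0, 0, 1, 0, 1])     # [[1,1,1,0,0],[0,0,0,1,1]]
s1, s2 = np.array([1.0, 1.0]), np.array([2.0, 2.0])
print(np.stack([s1, s2], axis=1) @ R)  # columns: s1 s1 s1 s2 s2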
"}, {"section_index": "7", "section_name": "3.3 MODEL ARCHITECTURES", "section_text": "There are in total six recurrent neural networks in our model, which can be divided into four layers, as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is the source word encoder, which contains two RNNs, as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder, identical to that of Bahdanau et al. (2015). The third layer is the first-level decoder. It takes the representation of the previous target word as a feedback, which is produced by the target word encoder in our model. As this feedback is less important, we use an ordinary RNN to encode the target word. The feedback r_{y_{t-1}} then combines the previous hidden state u_{t-1} and the context c_t from the sentence encoder to generate the vector s_t:

s_t = W_1 c_t + W_2 r_{y_{t-1}} + W_3 u_{t-1} + b    (8)

With the state of the HGRU in the second-level decoder set to s_t and the information of the previously generated character, the second-level decoder generates the next character until generating an end-of-sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model perfectly well in an end-to-end fashion.

Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate that the state should be set to the output of the first-level decoder.

"}, {"section_index": "8", "section_name": "3.4 GENERATION PROCEDURE", "section_text": "We first encode the source sequence as in the training procedure, then we generate the target sequence character by character based on the output s_t of the first-level decoder. Once we generate a delimiter, we compute the next vector s_{t+1} according to Eqn. (8) by combining the feedback r_{y_t} from the target word encoder, the context c_{t+1} from the sentence encoder and the hidden state u_t. The generation procedure terminates once an end-of-sentence (EOS) token is produced.

We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015); the source code and the trained models are available on GitHub. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on the English-to-French translation task, where the languages are morphologically poor. For fair comparison, we use the same dataset as in RNNsearch, which is the bilingual, parallel corpora provided by ACL WMT'14. In order to show the strengths of our model, we conduct experiments on the English-to-Czech and Czech-to-English translation tasks, where Czech is a morphologically rich language. We use the same dataset as (Chung et al., 2016a; Lee et al., 2016), which is provided by ACL WMT'15."}, {"section_index": "9", "section_name": "4.1 DATASET", "section_text": "We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of the 120 most frequent characters for each language, which covers nearly 100% of the training data. Characters not included in the list are mapped to a special token (<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015 (Test).
We do not use any monolingual corpus."}, {"section_index": "10", "section_name": "4.2 TRAINING DETAILS", "section_text": "We follow Bahdanau et al. (2015) and use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consist of two-layer RNNs, each with 1024 hidden units. We choose the 120 most frequent characters for DCNMT, and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.

We use the ADAM optimizer (Kingma and Ba, 2015) with a minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10^-3 and then annealed to 10^-4.

We use beam search to find a translation that approximately maximizes the conditional log-probability, a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.

We conduct a comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and the effects of character-level modeling in more detail. We illustrate the efficiency of deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).

Table 1: BLEU scores of different models on three language pairs.

Model | Size | Src | Trgt | Length | Epochs | Days | Dev | Test
En-Fr:
bpe2bpe (1) | - | bpe | bpe | 50 / 50 | - | - | 26.91 | 29.70
c2w (2) | ~54M | char | char | 300 / 300 | ~2.8 | ~27 | 25.89 | 27.04
CNMT | ~52M | char | char | 300 / 300 | ~3.8 | ~21 | 28.19 | 29.38
DCNMT | ~54M | char | char | 300 / 300 | 1 | ~7 | 27.02 | 28.13
DCNMT | ~54M | char | char | 300 / 300 | ~2.8 | ~19 | 29.31 | 30.56
En-Cs:
bpe2bpe (1) | - | bpe | bpe | 50 / 50 | - | - | 15.90 | 13.84
bpe2char (3) | - | bpe | char | 50 / 500 | - | - | - | 16.86
char2char (4) | - | char | char | 600 / 600 | >4 | ~90 | - | 17.5
hybrid (5) | ~250M | hybrid | hybrid | 50 / 50 | >4 | ~21 | - | 19.6
DCNMT | ~54M | char | char | 450 / 450 | 1 | ~5 | 15.50 | 14.87
DCNMT | ~54M | char | char | 450 / 450 | ~2.9 | ~15 | 17.89 | 16.96
Cs-En:
bpe2bpe (1) | - | bpe | bpe | 50 / 50 | - | - | 21.24 | 20.32
bpe2char (3) | ~76M | bpe | char | 50 / 500 | ~6.1 | ~14 | 23.27 | 22.42
char2char (4) | ~69M | char | char | 450 / 450 | ~7.9 | ~30 | 23.38 | 22.46
DCNMT | ~54M | char | char | 450 / 450 | 1 | ~5 | 20.50 | 19.75
DCNMT | ~54M | char | char | 450 / 450 | ~4.6 | ~22 | 23.24 | 22.48

In Table 1, "Length" indicates the maximum sentence length in training (based on the number of words or characters), and "Size" is the total number of parameters in the models. For DCNMT we report the BLEU scores after training for one epoch in the upper line and the final scores in the lower line. The results of the other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is estimated based on the training speed reported in Lee et al. (2016). For each test set, the best scores among the models per language pair are bold-faced. Clearly, character-level models are better than subword-level models, and our model is comparable to the state-of-the-art character-level models.
Note that the purely character model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result. We analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and smallest in terms of model size.

Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words: (a) ordinary RNN word encoder; (b) our word encoder.

In this section, we investigate whether our model can learn morphology. First we want to understand the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We find in Figure 3(a) that the words ending with "ability", encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder (Figure 3(b)) are more reasonable, and words with similar meanings lie closer together.

Then we analyze how our word encoder learns morphemes and the rules by which they are combined. We demonstrate the encoding details on "any*" and "every*". Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like "any" and "every" in these words. Moreover, even when the preceding characters differ, the encoder produces a similar weight for the same morpheme, like "way" in "anyway" and "everyway". The two-dimensional PCA projection in Figure 4(b) further validates our idea. The word encoder may be able to guess the meaning of "everything" even if it had never seen "everything" before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

Figure 4: (a) Energy of each character; (b) two-dimensional PCA projection of the representations of the "any*" and "every*" words.

Figure 5: Subword-level boundaries detected by our word encoder.

As analyzed in Section 3.1, learning morphology can speed up learning. This is also shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result already outperforms the final result of the bpe baseline.

Another advantage of our model is the ability to translate misspelled words and nonce words. A character-level model has a much better chance of recovering the original word or sentence. In Table 2, we list some examples where the source sentences are taken from newstest2013 but we change some words to misspelled words or nonce words.
We also list the translations from Google translate* and the online neural machine translation demo by LISA.

Table 2: Sample translations.

(a) Misspelled words
Source | For the time being howeve their research is unconclusive.
Reference | Leurs recherches ne sont toutefois pas concluantes pour l'instant.
Google translate | Pour le moment, leurs recherches ne sont pas concluantes.
LISA | Pour le moment UNK leur recherche est UNK.
DCNMT | Pour le moment, cependant, leur recherche n'est pas concluante.

(b) Nonce words (morphological change)
Source | Then we will be able to supplement the real world with virtual objects in a much convenienter form.
Reference | Ainsi, nous pourrons compléter le monde réel par des objets virtuels dans une forme plus pratique.
Google translate | Ensuite, nous serons en mesure de compléter le monde réel avec des objets virtuels dans une forme beaucoup plus pratique.
LISA | Ensuite, nous serons en mesure de compléter le vrai monde avec des objets virtuels sous une forme bien UNK.
DCNMT | Ensuite, nous serons en mesure de compléter le monde réel avec des objets virtuels dans une forme beaucoup plus pratique.

*The translations by Google translate were made on Nov 4, 2016.

As listed in Table 2(a), DCNMT is able to translate misspelled words correctly. For a word-based translator this is never possible, because the misspelled words are mapped to the <unk> token before translating; it will thus produce an <unk> token or just copy the word from the source sentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT translates "convenienter" correctly, as shown in Table 2(b). By concatenating "convenient" and "er", we get the comparative adjective form of "convenient", which never appears in the training set; however, our model guessed it correctly based on the morphemes and the rules.

Moreover, we apply our trained word encoder to the Penn Treebank. Unlike previous work, we are able to detect the boundaries of the subword units. As shown in Figure 5, "consumers", "monday", "football" and "greatest" are segmented into "consum-er-s", "mon-day", "foot-ball" and "great-est" respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.

In this paper we have proposed a hierarchical architecture to train a deep character-level neural machine translation model, by introducing a novel word encoder and a multi-leveled decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with word-level and other character-level models. The BLEU scores imply that our deep character-level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al., 2016), training for more epochs, and training with longer sentence pairs.

As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue that word-level models suffer from, and we have obtained a new functionality: translating misspelled or nonce words. More importantly, the deep character-level model is able to learn similar embeddings for words with similar meanings, like the word-level models.
Finally, it is potentially possible that the idea behind our approach could be applied to many other tasks, such as speech recognition and text summarization."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations, 2015.

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicit segmentation for neural machine translation. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. Association for the Advancement of Artificial Intelligence, 2016.

Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. Pages 311-318, Association for Computational Linguistics, 2002.

Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016."}, {"section_index": "12", "section_name": "A DETAILED DESCRIPTION OF THE MODEL", "section_text": "Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks.
We use f to denote the transition of the recurrent network."}, {"section_index": "13", "section_name": "A.1 SOURCE WORD ENCODER", "section_text": "As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word 'anyone' as

r_anyone = tanh(Σ_{t=1}^{6} w_t r_t)

where each r_t is computed by

r_t = f(e(x_t), r_{t-1})

Each r_t contains information about the preceding characters. The weight w_t of each representation r_t is computed by

w_t = exp(W_w h_t + b)

where W_w maps the vector h_t to a scalar, and h_t is the state of the BiRNN at time t, with forward state

h⃗_t = f(e(x_t), h⃗_{t-1})

The backward state h⃖_t ∈ R^l is computed similarly, but in reverse order.

After encoding the words with the source word encoder, we feed the representations to the source sentence encoder. For example, the source "Hello world </s>" is encoded into a vector [r_Hello, r_world, r_</s>]; the BiRNN sentence encoder then encodes this vector into [v_1, v_2, v_3]. The computation is the same as in Eqn. (9) and Eqn. (10), but the input is now the representations of the words.

"}, {"section_index": "14", "section_name": "A.3 FIRST-LEVEL DECODER", "section_text": "The first-level decoder is similar to Bahdanau et al. (2015), and utilizes the attention mechanism. Given the context vector c_t from the encoder, the hidden state u_t ∈ R^n of the GRU is computed by

u_t = (1 - z_t) ∘ u_{t-1} + z_t ∘ ũ_t
ũ_t = tanh(W r_{y_{t-1}} + U [q_t ∘ u_{t-1}] + C c_t)
z_t = σ(W_z r_{y_{t-1}} + U_z u_{t-1} + C_z c_t)
q_t = σ(W_q r_{y_{t-1}} + U_q u_{t-1} + C_q c_t)

r_{y_{t-1}} is the representation of the target word, produced by an ordinary RNN (taking the last state). The context vector c_t is computed by the attention mechanism at each step:

α_{tj} = exp(e_{tj}) / Σ_k exp(e_{tk}), with e_{tj} = E tanh(W_e u_{t-1} + H_e v_j)

where E ∈ R^{1×n} maps the vector into a scalar. The hidden state u_t is then further processed as in Eqn. (8) before being fed to the second-level decoder:

s_{t+1} = W_1 c_{t+1} + W_2 r_{y_t} + W_3 u_t + b

As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence; it would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks, which must build static symbolic expressions). We use a matrix R ∈ R^{T_y×T} to unfold the outputs [s_1, ..., s_{T_y}] of the first-level decoder (T_y is the number of words in the target sentence and T is the number of characters). R is a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of the HGRU becomes [s_{i_1}, ..., s_{i_T}]; that is,

p(y_t | {y_1, ..., y_{t-1}}, x) = softmax(g_t)

Finally, we compute the cross-entropy loss and train with the SGD algorithm.

We show additional sample translations in the following tables.
Table 3: Sample translations of En-Fr.

Source | This " disturbance " produces an electromagnetic wave ( of light , infrared , ultraviolet etc . ) , and this wave is nothing other than a photon - and thus one of the " force carrier " bosons .
Reference | Quand , en effet , une particule ayant une charge électrique accélère ou change de direction , cela " dérange " le champ électromagnétique en cet endroit précis , un peu comme un caillou lancé dans un étang .
DCNMT | Lorsque , en fait , une particule ayant une charge électrique accélère ou change de direction , cela " perturbe " le champ électromagnétique dans cet endroit spécifique , plutôt comme un galet jeté dans un étang .

Source | Since October , a manifesto , signed by palliative care luminaries including Dr Balfour Mount and Dr Bernard Lapointe , has been circulating to demonstrate their opposition to such an initiative .
Reference | Depuis le mois d' octobre , un manifeste , signé de sommités des soins palliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circule pour témoigner de leur opposition à une telle initiative .
DCNMT | Depuis octobre , un manifeste , signé par des liminaires de soins palliatifs , dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circulé pour démontrer leur opposition à une telle initiative .
Table 5: Sample translations of Cs-En.

Source | Prezident Karzai nechce zahraniční kontroly , zejména ne při příležitosti voleb plánovaných na duben 2014 .
Reference | President Karzai does not want any foreign controls , particularly on the occasion of the elections in April 2014 .
DCNMT | President Karzai does not want foreign controls , particularly in the opportunity of elections planned on April 2014 .

Source | Manželský pár měl dvě děti , Prestona a Heidi , a dlouhou dobu žili v kalifornském městě Malibu , kde pobývá mnoho celebrit .
Reference | The couple had two sons , Preston and Heidi , and lived for a long time in the Californian city Malibu , home to many celebrities .
DCNMT | The married couple had two children , Preston and Heidi , and long lived in the California city of Malibu , where many celebrities resided .

Source | Trestný čin rouhání je zachován a urážka je nadále zakázána , což by mohlo mít vážné důsledky pro svobodu vyjadřování , zejména pak pro tisk .
Reference | The offence of blasphemy is maintained and insults are now prohibited , which could have serious consequences on freedom of expression , particularly for the press .
DCNMT | The criminal action of blasphemy is maintained and insult is still prohibited , which could have serious consequences for freedom of expression , especially for the press .

Table 4: Sample translations of En-Cs.

Source | French troops have left their area of responsibility in Afghanistan ( Kapisa and Surobi ) .
Reference | Francouzské jednotky opustily svou oblast odpovědnosti v Afghánistánu ( Kapisa a Surobi ) .
DCNMT | Francouzské jednotky opustily svou oblast odpovědnosti v Afghánistánu ( Kapisa a Surois ) .

Source | " All the guests were made to feel important and loved " recalls the top model , who started working with him during Haute Couture Week Paris , in 1995 .
Reference | Všichni pozvaní se díky němu mohli cítit důležití a milovaní , " vzpomíná top modelka , která s ním začala pracovat v průběhu Pařížského týdne vrcholné módy v roce 1995 .
DCNMT | " Všichni hosté byli provedeni , aby se cítili důležití a milovaní " připomíná nejvyšší model , který s ním začal pracovat v průběhu týdeníku Haute Coutupe v Paříži v roce 1995 .

Source | " There are so many private weapons factories now , which do not endure competition on the international market and throw weapons from under the counter to the black market , including in Moscow , " says the expert .
Reference | " V současnosti vznikají soukromé zbrojařské podniky , které nejsou konkurenceschopné na mezinárodním trhu , a vyřazují zbraně , které dodávají na černý trh včetně Moskvy , " říká tento odborník .
DCNMT | " V současnosti existuje tolik soukromých zbraní , které nevydrží hospodářskou soutěž na mezinárodním trhu a hodí zbraně pod pultem k černému trhu , včetně Moskvy , " říká odborník ."}]
BybtVK9lg
[{"section_index": "0", "section_name": "AUTOENCODING VARIATIONAL INFERENCE\nFOR TOPIC MODELS", "section_text": "Akash Srivastava\nInformatics Forum, University of Edinburg\n10, Crichton St\nEdinburgh, EH89AB, UK\nTopic models are one of the most popular methods for learning representations o!\ntext, but a major challenge is that any change to the topic model requires mathe:\nmatically deriving a new inference algorithm. A promising approach to addres:\nthis problem is autoencoding variational Bayes (AEVB), but it has proven diffi:\ncult to apply to topic models in practice. We present what is to our knowledge the\nfirst effective AEVB based inference method for latent Dirichlet allocation (LDA)\nwhich we call Autoencoded Variational Inference For Topic Model (AVITM). Thi:\nmodel tackles the problems caused for AEVB by the Dirichlet prior and by com:\nponent collapsing. We find that AVITM matches traditional methods in accuracy\nwith much better inference time. Indeed, because of the inference network, we\nfind that it is unnecessary to pay the computational cost of running variationa\noptimization on test data. Because AVITM is black box, it is readily appliec\nto new topic models. As a dramatic illustration of this, we present a new topic\nmodel called ProdLDA, that replaces the mixture model in LDA with a produc\u2019\nof experts. By changing only one line of code from LDA, we find that ProdLDA\nyields much more interpretable topics, even if LDA is trained via collapsed Gibbs\nsampling."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Topic models (Blei}/2012) are among the most widely used models for learning unsupervised repre:\nsentations of text, with hundreds of different model variants in the literature, and have have found\napplications ranging from the exploration of the scientific literature (Blei & Lafferty, tc\ncomputer vision (Fei-Fei & Perona| {2005), bioinformatics (Rogers et al.}[2005), and archaeology\n(Mimno] |2009). A major challenge in applying topic models and developing new models is the\ncomputational cost of computing the posterior distribution. Therefore a large body of work has\nconsidered approximate inference methods, the most popular methods being variational methods\n\nespecially mean field methods, and Markov chain Monte Carlo, particularly methods based on col.\nlapsed Gibbs sampling.\nBoth mean-field and collapsed Gibbs have the drawback that applying them to new topic models.\neven if there is only a small change to the modeling assumptions, requires re-deriving the infer-\nence methods, which can be mathematically arduous and time consuming, and limits the ability of\npractitioners to freely explore the space of different modeling assumptions. This has motivated the\ndevelopment of black-box inference methods (Ranganath et al.| 2014} Mnih & Gregor! 
2014; Kucukelbir et al., 2016; Kingma & Welling, 2014), which require only very limited and easy-to-compute information from the model, and hence can be applied automatically to new models given a simple declarative specification of the generative process.

Autoencoding variational Bayes (AEVB) (Kingma & Welling, 2014; Rezende et al., 2014) is a particularly natural choice for topic models, because it trains an inference network (Dayan et al., 1995), a neural network that directly maps a document to an approximate posterior distribution,

* Additional affiliation: Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB

"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "without the need to run further variational updates. This is intuitively appealing because in topic models, we expect the mapping from documents to posterior distributions to be well behaved, that is, that a small change in the document will produce only a small change in topics. This is exactly the type of mapping that a universal function approximator like a neural network should be good at representing. Essentially, the inference network learns to mimic the effect of probabilistic inference, so that on test data, we can enjoy the benefits of probabilistic modeling without paying a further cost for inference.

However, despite some notable successes for latent Gaussian models, black-box inference methods are significantly more challenging to apply to topic models. For example, in initial experiments, we tried to apply ADVI (Kucukelbir et al., 2016), a recent black-box variational method, but it was difficult to obtain any meaningful topics. Two main challenges are: first, the Dirichlet prior is not a location-scale family, which hinders reparameterisation, and second, the well-known problem of component collapsing (Dinh & Dumoulin, 2016), in which the inference network becomes stuck in a bad local optimum in which all topics are identical.

In this paper, we present what is, to our knowledge, the first effective AEVB inference method for topic models, which we call Autoencoded Variational Inference for Topic Models, or AVITM. On several data sets, we find that AVITM yields topics of equivalent quality to standard mean-field inference, with a large decrease in training time. We also find that the inference network learns to mimic the process of approximate inference highly accurately, so that it is not necessary to run variational optimization at all on test data.

But perhaps more important is that AVITM is a black-box method that is easy to apply to new models. To illustrate this, we present a new topic model, called ProdLDA, in which the distribution over individual words is a product of experts rather than the mixture model used in LDA. We find that ProdLDA consistently produces better topics than standard LDA, whether measured by automatically determined topic coherence or qualitative examination.
Furthermore, because we perform probabilistic inference using a neural network, we can fit a topic model on roughly one million documents in under 80 minutes on a single GPU; and because we are using a black-box inference method, implementing ProdLDA requires a change of only one line of code from our implementation of standard LDA.

To summarize, the main advantages of our method are:

1. Topic coherence: ProdLDA returns consistently better topics than LDA, even when LDA is trained using Gibbs sampling.

2. Computational efficiency: Training AVITM is fast and efficient like standard mean-field. On new data, AVITM is much faster than standard mean field, because it requires only one forward pass through a neural network.

3. Black box: AVITM does not require rigorous mathematical derivations to handle changes in the model, and can be easily applied to a wide range of topic models.

Overall, our results suggest that AVITM is ready to take its place alongside mean field and collapsed Gibbs as one of the workhorse inference methods for topic models.

To fix notation, we begin by describing topic modelling and AVITM.

We describe the most popular topic model, latent Dirichlet allocation (LDA). In LDA, each document of the collection is represented as a mixture of topics, where each topic β_k is a probability distribution over the vocabulary. We also use β to denote the matrix β = (β_1 ... β_K). The generative process is then as described in Algorithm 1. Under this generative model, the marginal likelihood of a document w is

p(w|α, β) = ∫_θ ( Π_{n=1}^{N} Σ_{z_n=1}^{k} p(w_n|z_n, β) p(z_n|θ) ) p(θ|α) dθ    (1)

Posterior inference over the hidden variables θ and z is intractable due to the coupling between θ and β under the multinomial assumption (Dickey, 1983).

A popular approximation for efficient inference in topic models is mean field variational inference, which breaks the coupling between θ and z by introducing free variational parameters γ over θ and φ over z and dropping the edges between them. This results in an approximate variational posterior q(θ, z|γ, φ) = q_γ(θ) Π_n q_φ(z_n), which is optimized to best approximate the true posterior p(θ, z|w, α, β). The optimization problem is to minimize

L(γ, φ|α, β) = D_KL[q(θ, z|γ, φ) ‖ p(θ, z|w, α, β)] - log p(w|α, β)    (2)

The negative of this quantity is a lower bound on the marginal log likelihood, sometimes called an evidence lower bound (ELBO), a fact which can be easily verified by multiplying and dividing (1) by the variational posterior and then applying Jensen's inequality to its logarithm. Note that the mean field method optimizes over an independent set of variational parameters for each document. To emphasize this, we will refer to this standard method by the non-standard name of Decoupled Mean-Field Variational Inference (DMFVI).

For LDA, this optimization has closed-form coordinate descent equations due to the conjugacy between the Dirichlet and multinomial distributions. Although this is a computationally convenient aspect of DMFVI, it also limits its flexibility. Applying DMFVI to new models relies on the practitioner's ability to derive the closed-form updates, which can be impractical and sometimes impossible.

AEVB (Kingma & Welling, 2014; Rezende et al., 2014) is one of several recent methods that aim at "black box" inference to sidestep this issue. First, rewrite the ELBO as

L(γ, φ|α, β) = -D_KL[q(θ, z|γ, φ) ‖ p(θ, z|α)] + E_{q(θ,z|γ,φ)}[log p(w|z, θ, α, β)]    (3)

This form is intuitive. The first term attempts to match the variational posterior over latent variables to the prior on the latent variables, while the second term ensures that the variational posterior favors values of the latent variables that are good at explaining the data. By analogy to autoencoders, this second term is referred to as a reconstruction term.

What makes this method "autoencoding," and in fact the main difference from DMFVI, is the parameterization of the variational distribution. In AEVB, the variational parameters are computed by using a neural network called an inference network that takes the observed data as input. For example, if the model prior p(θ) is Gaussian, we might define the inference network as a feed-forward neural network (μ(w), v(w)) = f(w, γ), where μ(w) and v(w) are both vectors of length k, and γ are the network's parameters. Then we might choose a Gaussian variational distribution q_γ(θ) = N(θ; μ(w), diag(v(w))), where diag(···) produces a diagonal matrix from a column vector. The variational parameters γ can then be chosen by optimizing the ELBO (3). Note that we have now coupled the variational parameters for different documents, unlike DMFVI, because they are all computed from the same neural network.
To compute the expectations with respect to q in (3), Kingma & Welling (2014) and Rezende et al. (2014) use a Monte Carlo estimator which they call the "reparameterization trick" (RT; it appears earlier in Williams (1992)). In the RT, we define a variate U with a simple distribution that is independent of all variational parameters, like a uniform or standard normal, and a reparameterization function F such that F(U, γ) has distribution q_γ. This is always possible, as we could choose F to be the inverse cumulative distribution function of q_γ, although we will additionally want F to be easy to compute and differentiable. If we can determine a suitable F, then we can approximate (3) by taking Monte Carlo samples of U and optimizing γ using stochastic gradient descent.
Although simple conceptually, applying AEVB to topic models raises several practical challenges. The first is the need to determine a reparameterization function for q(θ) and q(z_n) to use the RT. The z_n are easily dealt with, but θ is more difficult; if we choose q(θ) to be Dirichlet, it is difficult to apply the RT, whereas if we choose q to be Gaussian or logistic normal, then the KL divergence in (3) becomes more problematic. The second issue is the well-known problem of component collapsing (Dinh & Dumoulin, 2016), which is a type of bad local optimum that is particularly endemic to AEVB and similar methods. We describe our solutions to each of these problems in the next few subsections.
Dealing with discrete variables like z using reparameterization can be problematic, but fortunately in LDA the variable z can be conveniently summed out. By collapsing z we are left with having to sample from θ only, reducing (1) to
p(w | α, β) = ∫ ( ∏_{n=1}^{N} p(w_n | β, θ) ) p(θ | α) dθ,
where the distribution of w_n | β, θ is Multinomial(1, βθ), recalling that β denotes the matrix of all topic-word probability vectors.
LDA gets its name from the Dirichlet prior on the topic proportions θ, and the choice of Dirichlet prior is important to obtaining interpretable topics (Wallach et al., 2009). But it is difficult to handle the Dirichlet within AEVB because it is difficult to develop an effective reparameterization function for the RT. Fortunately, an RT does exist for the Gaussian distribution and has been shown to perform quite well in the context of the variational autoencoder (VAE) (Kingma & Welling, 2014).
We resolve this issue by constructing a Laplace approximation to the Dirichlet prior. Following MacKay (1998), we do so in the softmax basis instead of the simplex. There are two benefits of this choice. First, Dirichlet distributions are unimodal in the softmax basis, with their modes coinciding with the means of the transformed densities. Second, the softmax basis also allows for carrying out unconstrained optimization of the cost function without the simplex constraints.
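To see the approximation in action, here is a small self-contained NumPy sketch (ours, not the paper's code) that draws topic proportions both from a sparse Dirichlet and from a Gaussian in the softmax basis, using the diagonal Laplace-approximation mean and variance stated in the next paragraphs; both put most of their mass on a few topics.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 50
alpha = 0.02 * np.ones(k)   # a sparse Dirichlet prior

# Diagonal Laplace approximation in the softmax basis
# (formulas as given in the next subsection).
mu1 = np.log(alpha) - np.log(alpha).mean()
var1 = (1.0 / alpha) * (1 - 2.0 / k) + np.sum(1.0 / alpha) / k**2

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

dirichlet_samples = rng.dirichlet(alpha, size=1000)
gauss_samples = np.array(
    [softmax(mu1 + np.sqrt(var1) * rng.standard_normal(k)) for _ in range(1000)]
)

# Both families of samples concentrate mass on a few coordinates.
print("mean max weight, Dirichlet:       ", dirichlet_samples.max(axis=1).mean())
print("mean max weight, softmax-Gaussian:", gauss_samples.max(axis=1).mean())
```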
The Dirichlet probability density function in this basis over the softmax variable h is given by
P(θ(h) | α) = ( Γ(Σ_k α_k) / ∏_k Γ(α_k) ) ∏_k σ(h)_k^{α_k} g(1ᵀh).
Here θ = σ(h), where σ(·) represents the softmax function. Recall that the Jacobian of σ is proportional to ∏_k θ_k, and g(·) is an arbitrary density that ensures integrability by constraining the redundant degree of freedom. We use the Laplace approximation of Hennig et al. (2012), which has the property that the covariance matrix becomes diagonal for large k (number of topics). This approximation to the Dirichlet prior p(θ | α) results in a distribution over the softmax variables h that is a multivariate normal with mean μ₁ and covariance matrix Σ₁, where
μ₁ₖ = log α_k − (1/K) Σ_i log α_i,
Σ₁ₖₖ = (1/α_k)(1 − 2/K) + (1/K²) Σ_i (1/α_i).
Finally, we approximate p(θ | α) in the simplex basis with p̂(θ | μ₁, Σ₁) = LN(θ | μ₁, Σ₁), where LN is a logistic normal distribution with parameters μ₁, Σ₁. Although we approximate the Dirichlet prior in LDA with a logistic normal, this is not the same idea as a correlated topic model (Blei & Lafferty, 2006), because we use a diagonal covariance matrix. Rather, it is an approximation to standard LDA."}, {"section_index": "3", "section_name": "3.3 VARIATIONAL OBJECTIVE", "section_text": "Now we can write the modified variational objective function. We use a logistic normal variational distribution over θ with diagonal covariance. More precisely, we define two inference networks as feed-forward neural networks f_μ and f_Σ with parameters δ; the output of each network is a vector in R^K. Then for a document w, we define q(θ) to be logistic normal with mean μ₀ = f_μ(w, δ) and diagonal covariance Σ₀ = diag(f_Σ(w, δ)), where diag converts a column vector to a diagonal matrix. Note that we can generate samples from q(θ) by sampling ε ∼ N(0, I) and computing θ = σ(μ₀ + Σ₀^{1/2} ε). This yields the objective
L(Θ) = Σ_{d=1}^{D} [ ½ ( tr(Σ₁⁻¹ Σ₀) + (μ₁ − μ₀)ᵀ Σ₁⁻¹ (μ₁ − μ₀) − K + log(|Σ₁| / |Σ₀|) ) − E_{ε∼N(0,I)}[ w_dᵀ log( σ(β) σ(μ₀ + Σ₀^{1/2} ε) ) ] ],   (7)
where Θ represents the set of all the model and variational parameters and w₁ ... w_D are the documents in the corpus. The first line in this equation arises from the KL divergence between the two logistic normal distributions q and p̂, while the second line is the reconstruction error.
In order to impose the simplex constraint on the β matrix during the optimization, we apply the softmax transformation. That is, each topic β_i ∈ R^V is unconstrained, and the notation σ(β) means to apply the softmax function separately to each column of the matrix β. Note that the mixture of multinomials for each word w_n can then be written as p(w_n | β, θ) = [σ(β)θ]_{w_n}, which explains the dot product in (7). To optimize (7), we use stochastic gradient descent using Monte Carlo samples from ε, following the Law of the Unconscious Statistician.
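For concreteness, a per-document Monte Carlo estimate of (7) can be written in a few lines of PyTorch. This is a hedged sketch with our own variable names (diagonal covariances are stored as variance vectors), not the authors' implementation.

```python
import torch

def avitm_loss(w, mu0, var0, mu1, var1, beta, eps):
    """One Monte Carlo estimate of the per-document objective (7).

    w: (V,) bag-of-words counts; mu0, var0: (K,) variational parameters;
    mu1, var1: (K,) Laplace approximation to the Dirichlet prior;
    beta: (K, V) unconstrained topic matrix; eps: (K,) draw from N(0, I).
    """
    k = mu0.shape[0]
    # KL between the two diagonal logistic normals (first line of (7)).
    kl = 0.5 * (torch.sum(var0 / var1)
                + torch.sum((mu1 - mu0) ** 2 / var1)
                - k
                + torch.sum(torch.log(var1)) - torch.sum(torch.log(var0)))
    # Reconstruction term (second line of (7)), one sample of epsilon.
    theta = torch.softmax(mu0 + var0.sqrt() * eps, dim=0)
    p_w = torch.softmax(beta, dim=1).t() @ theta   # sigma(beta) theta over words
    recon = torch.sum(w * torch.log(p_w + 1e-10))
    return kl - recon

K, V = 50, 2000
loss = avitm_loss(torch.ones(V), torch.zeros(K), torch.ones(K),
                  torch.zeros(K), torch.ones(K),
                  torch.randn(K, V), torch.randn(K))
```

Because `theta` is produced by differentiable operations on `eps`, stochastic gradients flow to the inference network exactly as the RT requires.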
3.4 TRAINING AND PRACTICAL CONSIDERATIONS: DEALING WITH COMPONENT COLLAPSING
AEVB is prone to component collapsing (Dinh & Dumoulin, 2016), which is a particular type of local optimum very close to the prior belief, early on in the training. As the latent dimensionality of the model is increased, the KL regularization in the variational objective dominates, so that the outgoing decoder weights collapse for the components of the latent variable that reach close to the prior and do not show any posterior divergence. In our case, the collapsing specifically occurs because of the inclusion of the softmax transformation to produce θ. The result is that the k inferred topics are identical, as shown in Table 7.
We were able to resolve this issue by tweaking the optimization. Specifically, we train the network with the ADAM optimizer (Kingma & Ba, 2015) using a high moment weight (β₁) and learning rate (η). By training at higher rates, early peaks in the functional space can be easily avoided. The problem is that momentum-based training coupled with a higher learning rate causes the optimizer to diverge. While explicit gradient clipping helps to a certain extent, we found that batch normalization (Ioffe & Szegedy, 2015) does even better by smoothing out the functional space and hence curbing sudden divergence.
Finally, we also found an increase in performance with dropout units when applied to θ to force the network to use more of its capacity.
While more prominent in the AEVB framework, the collapsing can also occur in DMFVI if the learning offset (referred to as the τ parameter (Hofmann, 1999)) is not set properly. Interestingly, a similar learning offset or annealing-based approach can also be used to down-weight the KL term in early iterations of the training to avoid local optima.
In LDA, the distribution p(w | θ, β) is a mixture of multinomials. A problem with this assumption is that it can never make any predictions that are sharper than the components that are being mixed (Hinton & Salakhutdinov, 2009). This can result in some topics appearing that are of poor quality and do not correspond well with human judgment. One way to resolve this issue is to replace this word-level mixture with a weighted product of experts, which by definition is capable of making sharper predictions than any of the constituent experts (Hinton, 2002). In this section we present a novel topic model, PRODLDA, that replaces the mixture assumption at the word level in LDA with a weighted product of experts, resulting in a drastic improvement in topic coherence. This is a good illustration of the benefits of a black box inference method, like AVITM, to allow exploration of new models."}, {"section_index": "4", "section_name": "4.1 MODEL", "section_text": "The PRODLDA model can be simply described as latent Dirichlet allocation where the word-level mixture over topics is carried out in natural parameter space, i.e. the topic matrix is not constrained to exist in a multinomial simplex prior to mixing. In other words, the only changes from LDA are that β is unnormalized, and that the conditional distribution of w_n is defined as w_n | β, θ ∼ Multinomial(1, σ(βθ)).
The connection to a product of experts is straightforward, as for the multinomial, a mixture of natural parameters corresponds to a weighted geometric average of the mean parameters. That is, consider two N dimensional multinomials parametrized by mean vectors p and q. Define the corresponding natural parameters r and s via p = σ(r) and q = σ(s), and let δ ∈ [0, 1]. It is then easy to show that
P(x | δr + (1 − δ)s) ∝ ∏_{i=1}^{N} σ(δr + (1 − δ)s)_i^{x_i} ∝ ∏_{i=1}^{N} ( p_i^δ q_i^{1−δ} )^{x_i}.
So the PRODLDA model can be simply described as a product of experts, that is, p(w_n | θ, β) ∝ ∏_k p(w_n | z_n = k, β)^{θ_k}. PRODLDA is an instance of the exponential-family PCA (Collins et al., 2001) class, and relates to the exponential-family harmoniums (Welling et al., 2004) but with non-Gaussian priors.
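The geometric-average identity is easy to check numerically, and the check also illustrates why the product of experts can be sharper than the corresponding mixture (a self-contained sketch with two random example multinomials of our own choosing):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(1)
r, s = rng.standard_normal(10), rng.standard_normal(10)  # natural parameters
p, q = softmax(r), softmax(s)                            # mean parameters
delta = 0.5

# Mixing in natural-parameter space = normalized geometric average ...
poe = softmax(delta * r + (1 - delta) * s)
geo = p**delta * q**(1 - delta)
assert np.allclose(poe, geo / geo.sum())

# ... which typically concentrates more than the arithmetic mixture in LDA.
mix = delta * p + (1 - delta) * q
print("entropy, product of experts:", -(poe * np.log(poe)).sum())
print("entropy, mixture:           ", -(mix * np.log(mix)).sum())
```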
"}, {"section_index": "5", "section_name": "5 RELATED WORK", "section_text": "For an overview of topic modeling, see Blei (2012). There are several examples of topic models based on neural networks and neural variational inference (Hinton & Salakhutdinov, 2009; Larochelle & Lauly, 2012; Mnih & Gregor, 2014; Miao et al., 2016), but we are unaware of methods that apply AEVB generically to a topic model specified by an analyst, or even of a successful application of AEVB to the most widely used topic model, latent Dirichlet allocation.
Recently, Miao et al. (2016) introduced a closely related model called the Neural Variational Document Model (NVDM). This method uses a latent Gaussian distribution over topics, like probabilistic latent semantic indexing, and averages over topic-word distributions in the logit space. However, they do not use either of the two key aspects of our work: explicitly approximating the Dirichlet prior using a Gaussian, or high-momentum training. In the experiments we show that these aspects lead to much improved training and much better topics.
Qualitative evaluation of topic models is a challenging task, and consequently a large body of work has developed automatic evaluation metrics that attempt to match human judgment of topic quality. Traditionally, perplexity has been used to measure the goodness-of-fit of the model, but it has been repeatedly shown that perplexity is not a good metric for qualitative evaluation of topics (Newman et al., 2010). Several new metrics of topic coherence evaluation have thus been proposed; see Lau et al. (2014) for a comparative review. Lau et al. (2014) showed that among all the competing metrics, normalized pointwise mutual information (NPMI) between all the pairs of words in a set of topics matches human judgment most closely, so we adopt it in this work. We also report perplexity, primarily as a way of evaluating the capability of different optimizers. Following standard practice (Blei et al., 2003), for variational methods we use the ELBO to calculate perplexity. For AEVB methods, we calculate the ELBO using the same Monte Carlo approximation as for training.
We run experiments on both the 20 Newsgroups (11,000 training instances with a 2000-word vocabulary) and RCV1 Volume 2 (800K training instances with a 10000-word vocabulary) datasets. Our preprocessing involves tokenization, removal of some non-UTF-8 characters for 20 Newsgroups, and English stop word removal. We first compare our AVITM inference method with the standard online mean-field variational inference (Hoffman et al., 2010) and collapsed Gibbs sampling (Griffiths & Steyvers, 2004) on the LDA model. We use standard implementations of both methods, scikit-learn for DMFVI and mallet (McCallum, 2002) for collapsed Gibbs. Then we compare two autoencoding inference methods on three different topic models: standard LDA, PRODLDA using our inference method, and the Neural Variational Document Model (NVDM) (Miao et al., 2016), using the inference described in the paper.
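For reference, topic-level NPMI can be computed from binary document-word co-occurrences along the following lines (a sketch of the standard formula, not the evaluation code used in the paper):

```python
import numpy as np
from itertools import combinations

def npmi_topic(top_words, docs, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    top_words: list of word ids; docs: binary (D, V) presence matrix.
    """
    scores = []
    for i, j in combinations(top_words, 2):
        p_i = docs[:, i].mean()
        p_j = docs[:, j].mean()
        p_ij = (docs[:, i] * docs[:, j]).mean()
        if p_ij == 0:            # never co-occurs: NPMI is -1 by convention
            scores.append(-1.0)
            continue
        pmi = np.log(p_ij / (p_i * p_j + eps))
        scores.append(pmi / (-np.log(p_ij) + eps))
    return float(np.mean(scores))

docs = (np.random.default_rng(0).random((1000, 2000)) < 0.05).astype(float)
print(npmi_topic([3, 17, 42, 99], docs))
```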
Table 1: Average topic coherence on the 20 Newsgroups dataset. Higher is better.
Tables 1 and 2 show the average topic coherence values for all the models for two different settings of k, the number of topics. Comparing the different inference methods for LDA, we find that, consistent with previous work, collapsed Gibbs sampling yields better topics than mean-field methods. Among the variational methods, we find that the VAE-LDA model (AVITM) yields similar topic coherence and perplexity to the standard DMFVI (although in some cases, VAE-LDA yields significantly better topics). However, AVITM is significantly faster to train than DMFVI. It takes 46 seconds on 20 Newsgroups compared to 18 minutes for DMFVI, whereas for the million-document corpus of RCV1 it takes only under 1.5 hours, while scikit-learn's implementation of DMFVI failed to return any results even after running for 24 hours.²
Comparing the new topic models with LDA, it is clear that PRODLDA finds significantly better topics than LDA, even when LDA is trained by collapsed Gibbs sampling. To verify this qualitatively, we display examples of topics from all the models in Table 6. The topics from ProdLDA appear visually more coherent than NVDM or LDA. Unfortunately, NVDM does not perform comparably to LDA for any value of k. To avoid any training dissimilarities we train all the competing models until we reach the perplexities that were reported in previous work. These are reported in Table 3.
Table 2: Average topic coherence on the RCV1 dataset. Higher is better. Results not reported for LDA DMFVI, as inference failed to converge in 24 hours.
Table 3: Perplexity scores for 20 Newsgroups. Lower is better.
A major benefit of AVITM inference is that it does not require running variational optimization, which can be costly, for new data. Rather, the inference network can be used to obtain topic proportions for new documents without running any optimization. We evaluate whether this approximation is accurate, i.e. whether the neural network effectively learns to mimic probabilistic inference. We verify this by training the model on the training set, then on the test set, holding the topics (the β matrix) fixed, and comparing the test perplexity if we obtain topic proportions by running the inference neural network directly, or by the standard method of variational optimization of the inference network on the test set. As shown in Table 4, the perplexity remains practically unchanged. The computational benefits of this are remarkable. On both datasets, computing perplexity using the neural network takes well under a minute, while running the standard variational approximation takes ~3 minutes even on the smaller 20 Newsgroups data. Finally, we investigate the reasons behind the improved topic coherence in PRODLDA. First, Table 5 explores the effects of each of our two main ideas separately. In this table, "Dirichlet" means that the prior is the Laplace approximation to Dirichlet(α = 0.02), while "Gaussian" indicates that we use a standard Gaussian as prior.
"High Learning Rate" training means we use β₁ > 0.8 and 0.1 > η > 0.001³ with batch normalization, whereas "Low Learning Rate" means β₁ > 0.8 and 0.0009 > η > 0.00009 without batch normalization. (For both parameters, the precise value was chosen by Bayesian optimization. We found that these values in the "with BN" cases were close to the default settings in the Adam optimizer.) We find that the high topic coherence that we achieve in this work is only possible if we use both tricks together. In fact the high learning rates with momentum are required to avoid local minima in the beginning of training, and batch normalization is required to be able to train the network at these values without diverging. If trained at a lower momentum value or at a lower learning rate, PRODLDA shows component collapsing. Interestingly, if we choose a Gaussian prior, rather than the logistic normal approximation used in ProdLDA or NVLDA, the model is easier to train even with a low learning rate, without any momentum or batch normalization.
The main advantage of AVITM topic models as opposed to NVDM is that the Laplace approximation allows us to match a specific Dirichlet prior of interest. As pointed out by Wallach et al. (2009), the choice of Dirichlet hyperparameter is important to the topic quality of LDA. Following this reasoning, we hypothesize that AVITM topics are higher quality than those of NVDM because they are much more focused, i.e., apply to a more specific subset of documents of interest. We provide support for this hypothesis in Figure 1 by evaluating the sparsity of the posterior proportions over topics, that is, how many of the model's topics are typically used to explain each document. In order to estimate the sparsity in topic proportions, we project samples from the Gaussian latent spaces of PRODLDA and NVDM into the simplex and average them across documents. We compare the topic sparsity for the standard Gaussian prior used by NVDM to the Laplace approximation of Dirichlet priors with different hyperparameters. Clearly the Laplace approximation to the Dirichlet prior significantly promotes sparsity, providing support for our hypothesis that preserving the Dirichlet prior explains the increased topic coherence in our method.
²We note that much recent work follows Hinton & Salakhutdinov (2009) in reporting perplexity for the LDA Gibbs sampler on only a small subset of the test data. Our results are different because we use the entire test dataset.
³β₁ is the weight on the average of the gradients from the previous time step and η refers to the learning rate.
[Figure 1 plots log P(topic proportions | document) against topic index, comparing the standard Gaussian+softmax prior with Laplace approximations to Dirichlet priors with α = 1/10, 1/50, and 1/200.]
Figure 1: Effect of prior assumptions on θ on sparsity of θ in neural topic models.
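The sparsity estimate behind Figure 1 is easy to reproduce in NumPy: draw latent Gaussian samples under each prior, project them through the softmax, and average the sorted proportions across documents. The settings below (k = 200 topics, 500 documents) are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_docs = 200, 500

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def avg_sorted_proportions(mu, std):
    """Mean of sorted topic proportions of softmax-projected Gaussian draws."""
    theta = np.array([softmax(mu + std * rng.standard_normal(k))
                      for _ in range(n_docs)])
    return np.sort(theta, axis=1)[:, ::-1].mean(axis=0)

# Standard Gaussian prior (as in NVDM) vs. Laplace approximations to
# Dirichlet(alpha) for several alpha (as in ProdLDA).
curves = {"gaussian": avg_sorted_proportions(np.zeros(k), np.ones(k))}
for alpha in (1/10, 1/50, 1/200):
    a = alpha * np.ones(k)
    mu1 = np.log(a) - np.log(a).mean()
    var1 = (1/a) * (1 - 2/k) + np.sum(1/a) / k**2
    curves[f"dirichlet a={alpha:.4g}"] = avg_sorted_proportions(mu1, np.sqrt(var1))

for name, c in curves.items():
    print(f"{name:>20}: top-10 topics carry {c[:10].sum():.2f} of the mass")
```

The Dirichlet-approximation curves put almost all mass on a handful of topics, while the standard Gaussian spreads it nearly uniformly, mirroring the figure.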
Table 5: Average topic coherence for different choices of prior and optimization strategies of PRODLDA on 20 Newsgroups for k = 50.
The inference network architecture can be found in Figure 2 in the appendix."}, {"section_index": "6", "section_name": "7 DISCUSSION AND FUTURE WORK", "section_text": "Table 4: Evaluation of the inference network of VAE-LDA on the 20 Newsgroups test set. "Inference network only" shows the test perplexity when the inference network is trained on the training set, but no variational optimization is performed on the test set. "Inference Network + Optimization" shows the standard approach of optimizing the ELBO on the test set. The neural network effectively learns to approximate probabilistic inference.
We present what is to our knowledge the first effective AEVB inference algorithm for latent Dirichlet allocation. Although this combination may seem simple in principle, in practice this method is difficult to train because of the Dirichlet prior and because of the component collapsing problem. By addressing both of these problems, we presented a black-box inference method for topic models with the notable advantage that the neural network allows computing topic proportions for new documents without the need to run any variational optimization. As an illustration of the advantages of black box inference techniques, we presented a new topic model, ProdLDA, which achieves significantly better topics than LDA, while requiring a change of only one line of code from AVITM for LDA. Our results suggest that AVITM inference is ready to take its place alongside mean field and collapsed Gibbs as one of the workhorse inference methods for topic models. Future work could include extending our inference methods to handle dynamic and correlated topic models.
Table 6: Five randomly selected topics from all the models.
Model                  Topics
ProdLDA                motherboard meg printer quadra hd windows processor vga mhz connector
                       armenian genocide turks turkish muslim massacre turkey armenians armenia greek
                       voltage nec outlet circuit cable wiring wire panel motor install
                       season nhl team hockey playoff puck league flyers defensive player
                       israel israeli lebanese arab lebanon arabs civilian territory palestinian militia
LDA (NVLDA)            db file output program line entry write bit int return
                       drive disk get card scsi use hard ide controller one
                       game team play win year player get think good make
                       use law state health file gun public issue control firearm
                       people say one think life make know god man see
LDA (DMFVI)            write article dod ride right go get night dealer like
                       gun law use drug crime government court criminal firearm control
                       lunar flyers hitter spacecraft power us existence god go mean
                       stephanopoulos encrypt spacecraft ripem rsa cipher saturn violate lunar crypto
                       file program available server version include software entry ftp use
LDA (Collapsed Gibbs)  get right back light side like see take time one
                       list mail send post anonymous internet file information user message
                       thanks please know anyone help look appreciate get need email
                       jesus church god law say christian one christ day come
                       bike dod ride dog motorcycle write article bmw helmet get
NVDM                   light die burn body life inside mother tear kill christian
                       insurance drug different sport friend bank owner vancouver buy prayer
                       input package interface output tape offer component channel level model
                       price quadra hockey slot san playoff jose deal market dealer
                       christian church gateway catholic christianity homosexual resurrection modem mouse sunday
"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "David Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
David M. Blei and John D. Lafferty. A correlated topic model of science. Annals of Applied Statistics, 1(1):17–35, 2007.
Table 7: VAE-LDA fails to learn any meaningful topics when component collapsing occurs.
The table shows five randomly sampled topics (which are essentially slight variants of each other) from when the VAE-LDA model is trained without BN and high-momentum training.
We thank Andriy Mnih, Chris Dyer, Chris Russell, David Blei, Hannah Wallach, Max Welling, Mirella Lapata and Yishu Miao for helpful comments, discussions and feedback.
Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems, volume 13, pp. 23, 2001.
James M Dickey. Multiple hypergeometric functions: Probabilistic interpretations and statistical uses. Journal of the American Statistical Association, 78(383):628–637, 1983.
Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pp. 524–531. IEEE, 2005.
Thomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235, 2004.
Philipp Hennig, David H Stern, Ralf Herbrich, and Thore Graepel. Kernel topic models. In AISTATS, pp. 511–519, 2012.
Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
Geoffrey E Hinton and Ruslan R Salakhutdinov. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, pp. 1607–1614, 2009.
Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 50–57. ACM, 1999.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations (ICLR), 2015.
Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic differentiation variational inference. arXiv preprint arXiv:1603.00788, 2016.
Jey Han Lau, David Newman, and Timothy Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In EACL, pp. 530–539, 2014.
David JC MacKay. Choice of basis for Laplace approximation. Machine Learning, 33(1):77–86, 1998.
Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. In International Conference on Machine Learning, pp. 1727–1736, 2016.
Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995.
Laurent Dinh and Vincent Dumoulin. Training neural Bayesian nets. http://www.iro.umontreal.ca/~bengioy/cifar/NCAP2014-summerschool/slides/Laurent_dinh_cifar_presentation.pdf, August 2016.
Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pp. 856–864, 2010.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In International Conference on Machine Learning, pp. 1791–1799, 2014.
Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pp. 814–822, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278–1286, 2014.
Simon Rogers, Mark Girolami, Colin Campbell, and Rainer Breitling. The latent process decomposition of cDNA microarray data sets. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 2(2):143–156, 2005.
Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.
Max Welling, Michal Rosen-Zvi, and Geoffrey E Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, volume 4, pp. 1481–1488, 2004.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[Figure 2 diagram: the input bag-of-words vector feeds a fully connected layer (Input × 100) with a softplus activation, followed by 100 × 100 fully connected layers producing the Mean and Sigma heads, each followed by a BN layer.]
Figure 2: Architecture of the inference network used in the experiments."}]
HJKkY35le
[{"section_index": "0", "section_name": "MODE REGULARIZED GENERATIVE ADVERSARIAL\nNETWORKS", "section_text": "TTong Chex\u2018 Yanran Li ':? Athul Paul Jacob, 'Yoshua Bengio, *Wenjie Li\nAlthough Generative Adversarial Networks achieve state-of-the-art results on a\nvariety of generative tasks, they are regarded as highly unstable and prone to miss\nmodes. We argue that these bad behaviors of GANs are due to the very particular\nfunctional shape of the trained discriminators in high dimensional spaces, which\ncan easily make training stuck or push probability mass in the wrong direction,\ntowards that of higher concentration than that of the data generating distribution.\n\nWe introduce several ways of regularizing the objective, which can dramatically\nstabilize the training of GAN models. We also show that our regularizers can\nhelp the fair distribution of probability mass across the modes of the data gener-\nating distribution, during the early phases of training and thus providing a unified\n\nealnutian ta the miccinga madec nrahlam"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Despite their success, GANs are generally considered as very hard to train due to training instability\nand sensitivity to hyper-parameters. On the other hand, a common failure pattern observed whil\ntraining GANs is the collapsing of large volumes of probability mass onto a few modes. Namely\nalthough the generators produce meaningful samples, these samples are often from just a few mode:\n(small regions of high probability under the data distribution). Behind this phenomenon is the miss\ning modes problem, which is widely conceived as a major problem for training GANs: many mode:\nof the data generating distribution are not at all represented in the generated samples, yielding :\nmuch lower entropy distribution, with less variety than the data generating distribution.\nThis issue has been the subject of several recent papers proposing several tricks and new archi-\ntectures to stabilize GAN\u2019s training and encourage its samples\u2019 diversity. However, we argue that a\ngeneral cause behind these problems is the lack of control on the discriminator during GAN training.\nWe would like to encourage the manifold of the samples produced by the generator to move towards\nthat of real data, using the discriminator as a metric. However, even if we train the discriminator\nto distinguish between these two manifolds, we have no control over the shape of the discriminator\nfunction in between these manifolds. In fact, the shape of the discriminator function in the data\n\u201cAuthors contributed equally.\n\u2018Tong Che; *Yanran Li; ': Athul Paul Jacob, 'Yoshua Bengio, * Wenjie Li\n\ntMontreal Institute for Learning Algorithms, Universit\u00e9 de Montr\u00e9al, Montr\u00e9al, QC H3T 1J4, Canada\nDepartment of Computing, The Hong Kong Polytechnic University, Hong Kong\n\n8David R. Cheriton School of Computer Science, University Of Waterloo, Waterloo, ON N2L 3G1, Canada\n{tong.che,ap.jacob,yoshua.bengio} @umontreal.ca\n\nSeev]licewili} @comn nolvy edu hk"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Generative adversarial networks (GAN) 2014) have demonstrated their potential\non various tasks, such as image generation, image super-resolution, 3D object generation, and video\nprediction (Radford et al. Ledig et al.|[2016} |Sgnderby et al.|{2016} [Nguyen et al.| [2016] [Wu]\n2016] (2015). 
The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks.
space can be very non-linear, with bad plateaus and wrong maxima, and this can therefore hurt the training of GANs (Figure 1).
Figure 1: Samples with very high discrimination values (D=1.0) in a DCGAN model trained on the CelebA dataset.
To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L² norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones.
Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated from this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map (Wang & Gupta, 2016), image synthesis from text (Reed et al., 2016) and edge maps (Isola et al., 2016), real-time image manipulation (Zhu et al., 2016), temporal image generation (Zhou & Berg, 2016; Saito & Matsumoto, 2016), and texture synthesis, style transfer, and video stylization (Li & Wand, 2016).
Researchers also aim at stretching GAN's limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in training objectives for generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the sample visual fidelity. Recent literature has also shown impressive results on image super-resolution to infer photo-realistic natural images for 4x upscaling factors (Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016).
Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN's training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN's training.
The generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016).
In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN's training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered as work orthogonal to ours.
Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model.
However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes and thus improve GAN's training stability and sample qualities. We demonstrate detailed differences from various aspects in Appendix D."}, {"section_index": "4", "section_name": "3 MODE REGULARIZERS FOR GANS", "section_text": "The GAN training procedure can be viewed as a non-cooperative two player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. The generator G then has to take advantage of the local gradient ∇ log D(G(z)) provided by the discriminator to improve itself, namely to move towards the data manifold.
We now take a closer look at the root cause of the instabilities while training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014), Denton et al. (2015), and Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, consider a common failure pattern for training GANs, the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples, D is nearly zero. In such cases, the generator will receive no gradient to improve itself.¹
Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes."}, {"section_index": "5", "section_name": "3.1 GEOMETRIC METRICS REGULARIZER", "section_text": "Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator, while in supervised models, the optimization targets are distance functions with nice geometric properties.
The latter usually provides much easier training gradients than the former, especially at the early stages of training.
¹This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and our experiments.
Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z): Z → X generates samples by sampling first from a fixed prior distribution in space Z followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x): X → Z. Assume d is some similarity metric in the data space; we add E_{x∼p_d}[d(x, G ∘ E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.
In practice, there are many options for the distance measure d. For instance, the pixel-wise L² distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).
The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say the L² metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.
In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum Σ_i ∇_θ log D(G_θ(z_i)). The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect, so that the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.
Figure 2: Illustration of the missing modes problem.
As an example, consider the situation in Figure 2. For most z, the gradient of the generator ∇_θ log D(G_θ(z)) pushes the generator towards the major mode M₁. Only when G(z) is very close to the mode M₂ can the generator get gradients to push itself towards the minor mode M₂. However, it is possible that such z is of low or zero probability in the prior distribution p₀.
Given this observation, consider a regularized GAN model with the metric regularizer. Assume M₀ is a minor mode of the data generating distribution. For x ∈ M₀, we know that if G ∘ E is a good autoencoder, G(E(x)) will be located very close to mode M₀. Since there are sufficient training examples of mode M₀ in the training data, we add the mode regularizer E_{x∼p_d}[log D(G ∘ E(x))] to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve fair probability mass distribution across different modes.
In short, our regularized optimization target for the generator and the encoder becomes
T_G = −E_z[log D(G(z))] + E_{x∼p_d}[λ₁ d(x, G ∘ E(x)) + λ₂ log D(G ∘ E(x))]
T_E = E_{x∼p_d}[λ₁ d(x, G ∘ E(x)) + λ₂ log D(G ∘ E(x))]
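Concretely, the two targets translate into a few extra loss terms on top of the usual generator loss. The PyTorch sketch below is our own illustration, with G, E, and D standing for any generator, encoder, and discriminator modules; we write both targets as losses to minimize, so the log D(G ∘ E(x)) term that the generator wants to be large enters with a negative sign.

```python
import torch

def generator_encoder_losses(x, z, G, E, D, lam1=0.2, lam2=0.4):
    """Regularized targets T_G and T_E, written as losses to minimize."""
    eps = 1e-8
    rec = G(E(x))                            # G o E(x), the reconstruction
    d_rec = D(rec).clamp(eps, 1 - eps)       # discriminator on reconstructions
    d_gen = D(G(z)).clamp(eps, 1 - eps)      # discriminator on samples

    # d(x, G o E(x)) with the pixel-wise L2 metric; a feature distance
    # could be substituted here without changing anything else.
    recon = ((x - rec) ** 2).flatten(1).sum(dim=1).mean()
    # Mode regularizer: push reconstructions into high-D regions.
    mode_reg = -torch.log(d_rec).mean()

    loss_G = -torch.log(d_gen).mean() + lam1 * recon + lam2 * mode_reg
    loss_E = lam1 * recon + lam2 * mode_reg
    return loss_G, loss_E
```

The λ₁ = 0.2, λ₂ = 0.4 defaults mirror the loss weights used in the grid search of Section 4.1.1.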
"}, {"section_index": "6", "section_name": "3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS", "section_text": "On some large scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters.
Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples.
The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.
An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train a discriminator D₁, which separates between the samples x and G ∘ E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D₁(G ∘ E(x)) + λ d(x, G ∘ E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D₂ between distributions G(z) and G ∘ E(x), and we train G to maximize log D₂(G(z)). Since these two distributions are now nearly on the same low dimensional manifold, the discriminator D₂ provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples."}, {"section_index": "7", "section_name": "3.4 EVALUATION METRICS FOR MODE MISSING", "section_text": "In order to estimate both the missing modes and the sample qualities in our experiments, we use several different metrics for different experiments instead of human annotators.
The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset:
exp( E_x[ KL(p(y|x) ‖ p*(y)) ] ),
where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p*(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has a high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p*(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score:
exp( E_x[ KL(p(y|x) ‖ p(y)) ] − KL(p*(y) ‖ p(y)) ),
where p(y) is the distribution of labels in the training data. According to our human evaluation experiences, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (see Goodfellow et al. (2014) for a proof)
D*(s) ≈ p_d(s) / (p_d(s) + p_g(s)),
where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D* from learning a perfect 0-1 separation of p_g and p_d, we inject a zero-mean Gaussian noise to the inputs when training D*. After training, we test D* on the test set T of the real dataset. If for any test sample t ∈ T, the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes."}, {"section_index": "8", "section_name": "4.1 MNIST", "section_text": "We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold."}, {"section_index": "9", "section_name": "4.1.1 GRID SEARCH FOR MNIST GAN MODELS", "section_text": "In order to systemically explore the effect of the proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ₁ = 0.2 and λ₂ = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and we list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters.
For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits for stabilizing GANs and improving sample qualities.
[Figure 3 is a histogram comparing the MODE-score distributions of GAN and Regularized GAN models across score bins from 0–0.5 up to 7–8; the GAN baseline concentrates at low scores (59.97% in one low bin), while the regularized models shift toward high scores.]
Figure 3: The distributions of MODE scores for GAN and regularized GAN.
To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ₁ = λ₂. The results are shown in Figure 4.
Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ₁ and λ₂ in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN."}, {"section_index": "10", "section_name": "4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES", "section_text": "In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to a number in [0, 999] in a single 64×64 image, and then train DCGAN as a baseline model on the 1000 modes dataset. The digits on the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.
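The two metrics of the compositional experiment can be computed directly from classifier predictions. A minimal sketch (our own helper; the smoothing constants are arbitrary):

```python
import numpy as np

def missing_modes_and_kl(pred_digits, train_counts):
    """#Miss and KL metrics for the 1000-mode compositional experiment.

    pred_digits: (N, 3) classifier-predicted digits for N generated images.
    train_counts: (1000,) counts of each 3-digit number in the training data.
    """
    codes = pred_digits[:, 0] * 100 + pred_digits[:, 1] * 10 + pred_digits[:, 2]
    gen_counts = np.bincount(codes, minlength=1000).astype(float)
    n_miss = int((gen_counts == 0).sum())          # modes never generated

    p = train_counts / train_counts.sum()          # training distribution
    q = (gen_counts + 1e-12) / (gen_counts.sum() + 1e-9)
    kl = float(np.sum(q * np.log(q / (p + 1e-12))))
    return n_miss, kl

rng = np.random.default_rng(0)
fake_preds = rng.integers(0, 10, size=(50000, 3))  # stand-in for real predictions
print(missing_modes_and_kl(fake_preds, np.ones(1000)))
```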
Table 1: Grid Search for Hyperparameters.
nLayerG    [2, 3, 4]
nLayerD    [2, 3, 4]
sizeG      [400, 800, 1600, 3200]
sizeD      [256, 512, 1024]
dropoutD   [True, False]
optimG     [SGD, Adam]
optimD     [SGD, Adam]
lr         [1e-2, 1e-3, 1e-4]
The performances on the compositional experiment are measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically among all the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem."}, {"section_index": "11", "section_name": "4.2 CELEBA", "section_text": "To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B."}, {"section_index": "12", "section_name": "4.2.1 MISSING MODES ESTIMATION ON CELEBA", "section_text": "We also employ a third party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise in the input layer in the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as being on the missing modes.
The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform baseline DCGAN models on all settings. In particular, MDGAN surpasses the other models, showing its superiority at mode preserving. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.
To get a better understanding of the models' performance, we want to figure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data for GAN to learn.
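Given such a trained noisy discriminator D*, counting test images that lie on missing modes takes only a few lines; the sketch below is illustrative (the 0.99 threshold is our own choice, not from the paper):

```python
import torch

@torch.no_grad()
def count_missing_mode_images(d_star, test_images, thresh=0.99):
    """Count test images t with D*(t) close to 1, i.e. on missing modes.

    d_star is the third-party discriminator described above, trained with
    zero-mean Gaussian input noise; it is never used to train the generator.
    """
    scores = d_star(test_images).squeeze()
    return int((scores > thresh).sum().item())

# Toy usage with a stand-in discriminator on random "images".
fake_d = lambda x: torch.sigmoid(x.flatten(1).mean(dim=1))
print(count_missing_mode_images(fake_d, torch.randn(64, 3, 64, 64)))
```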
Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black, and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them.
Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) substantially reduces the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (like in the Inception score).
            Set 1         Set 2         Set 3         Set 4
            #Miss  KL     #Miss  KL     #Miss  KL     #Miss  KL
DCGAN       204.7  77.9   204.3  60.2   103.4  75.9   89.3   77.8
Reg-DCGAN   32.1   62.3   71.5   58.9   42.7   68.4   31.6   67.8
Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.
σ     DCGAN (100)  DCGAN (200)  Reg-GAN (100)  Reg-GAN (200)  MDGAN (200)
3.5   5463         17089        754            3644           74
4.0   590          15832        42             391            13
Figure 5: Test set images that are on missing modes. Left: Both MDGAN and DCGAN missing. Right: Only DCGAN missing.
After quantitative evaluation, we manually examine the generated samples by our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.²
Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharp textures.
Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.
As to sample quality, it is worth noting that the samples from MDGAN suffer fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower compared with the other four compared models. We attribute this to the help of the autoencoder as the regularizer to alter the generation manifolds. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions.
²For fair comparison, we also recommend readers to refer to the original papers Dumoulin et al. (2016); Larsen et al. (2015); Radford et al. (2015). The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64×64 size.
Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.
In terms of the missing modes problem, we instructed five individuals to conduct human evaluation on the generated samples. They achieve consensus that MDGAN wins in terms of mode diversities. Two people pointed out that MDGAN generates a larger amount of samples with side faces than other models.
"}, {"section_index": "13", "section_name": "5 CONCLUSIONS", "section_text": "Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while missing modes from the data distribution or even collapsing large amounts of probability mass onto some modes. Successful GAN training usually requires large amounts of human and computing effort to fine-tune the hyper-parameters, in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.

We provide systematic ways to measure and avoid the missing modes problem and stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold."}, {"section_index": "14", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of the grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us with running the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, the National Natural Science Foundation of China (61672445 and 61272291), the Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6)."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv, 2016.

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets.
arXiv preprint arXiv:1611.06624, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.

Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262-277. Springer, 2016.

Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016."}, {"section_index": "16", "section_name": "A APPENDIX: PSEUDO CODE FOR MDGAN", "section_text": "In this Appendix, we give the detailed training procedure of an MDGAN example we discuss in Section 3.3. Each iteration updates, in turn, the discriminator D1, the generator-encoder pair (G, E), the discriminator D2, and finally the generator G, each using SGD with gradient ascent on:

∇_{θ1} (1/m) Σ_{i=1}^{m} [ log D1(x_i) + log(1 − D1(G(E(x_i)))) ]

∇_{θ2} (1/m) Σ_{i=1}^{m} [ λ log D1(G(E(x_i))) − ||x_i − G(E(x_i))||² ]

∇_{θ3} (1/m) Σ_{i=1}^{m} [ log D2(G(E(x_i))) + log(1 − D2(G(z_i))) ]

7. Update generator G using SGD with gradient ascent:

∇_{θ4} (1/m) Σ_{i=1}^{m} log D2(G(z_i))

Figure 8: The detailed training procedure of an MDGAN example.

One has to pay particular attention to batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from the sampled noise z, the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other.

We use similar architectures for the Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN (Radford et al., 2015). Apart from the discriminator and generator, which are the same as in DCGAN, we add an encoder which is the "inverse" of the generator, obtained by reversing the order of layers and replacing the de-convolutional layers with convolutional layers.

The data is sampled from a mixture of 6 Gaussians, with standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise from [0, 1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to 1 real number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4. In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distributions from the standard GAN and our proposed regularized GAN is shown in Figure 9.

Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases (epochs 1, 200, 400, 600, 800, 1000), whereas the rightmost column presents the target, the original data distribution. The top row shows the standard GAN result. The generator has a hard time oscillating among the modes of the data distribution, and is only able to "recover" a single data mode at once. In contrast, the bottom row shows results of our regularized GAN. Its generator quickly captures the underlying multiple modes and fits the target distribution.
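As an illustration, a single regularized generator/encoder update for this toy setup might look as follows in PyTorch. This is a sketch under the stated hyper-parameters (λ1 = λ2 = 0.005, Adam at 1e-4), with `G`, `E`, `D` standing for the small MLPs above and `D` assumed to output probabilities; the exact loss form is one plausible reading of the regularizers, not our released implementation.

```python
import torch

def regularized_generator_step(G, E, D, x, opt, lam1=0.005, lam2=0.005):
    z = torch.rand(x.size(0), 3)                      # 3D uniform noise in [0, 1]
    rec = G(E(x))                                     # reconstruction through G(E(x))
    gan = -torch.log(D(G(z)) + 1e-8).mean()           # usual generator GAN term
    geom = lam1 * ((x - rec) ** 2).sum(1).mean()      # geometric (reconstruction) regularizer
    mode = -lam2 * torch.log(D(rec) + 1e-8).mean()    # mode regularizer on G(E(x))
    loss = gan + geom + mode
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```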
"}, {"section_index": "17", "section_name": "D APPENDIX: COMPARISON WITH VAEGAN", "section_text": "In this appendix section, we demonstrate the effectiveness of the mode-regularized GANs proposed in this paper as compared to VAEGAN (Larsen et al., 2015) in terms of theoretical difference, sample quality, and number of missing modes.

With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) || p(z)). This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN:

1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by a factorized Gaussian distribution q (the identity after this list makes the resulting gap in the bound explicit).

2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.
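To see why the first assumption matters, recall the standard decomposition of the log-likelihood (a textbook identity, not specific to VAEGAN): the bound is tight exactly when q(z|x) matches the true posterior, so a multimodal or non-Gaussian p(z|x) makes it loose.

```latex
\log p(x) \;=\;
  \underbrace{\mathbb{E}_{q(z|x)}\!\left[\log p(x|z)\right]
    - \mathrm{KL}\!\left(q(z|x)\,\|\,p(z)\right)}_{\text{variational bound}}
  \;+\;
  \underbrace{\mathrm{KL}\!\left(q(z|x)\,\|\,p(z|x)\right)}_{\text{gap}\;\ge\;0}
```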
The first assumption does not necessarily hold for GANs. We have found that in some trained models of DCGANs, the real posterior p(z|x) is not even guaranteed to have only one mode, let alone be anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, however, we use a plain auto-encoder instead of a VAE as the objective. Plain auto-encoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There are no conflicts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.

In terms of sample quality and missing modes, we run the official code of VAEGAN with their default settings. We train VAEGAN for 30 epochs, and our models for only 20 epochs, for fairness.

The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2. We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as we presented above. In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGAN's. The difference is that the second assumption mentioned above is not required in our approach. In our framework, the auto-encoder helps alter the generation manifolds, leading to fewer distortions in fine-grained details in our generated samples.

Figure 10: Samples generated by our models and VAEGAN. The third line shows samples generated by our self-trained VAEGAN model, with default settings. The last line shows generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.

In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.

Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.

σ     VAEGAN (100)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   9720           754             3644            74
4.0   5862           42              391             13

We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very badly on our metric for missing modes is that the samples generated are of low quality, so the discriminator classifies the samples as "not on mode". Namely, the data generated is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.

To conduct a fairer evaluation between VAEGAN and our methods, we also performed a blind human evaluation. Again we instructed five individuals to conduct this evaluation of sample variability. Without telling them which samples were generated by VAEGAN and which by our methods, four people agreed that our method wins in terms of sample diversity. One person thought the samples are equally diverse.

In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically as discussed above. Such differences empirically result in better sample quality and mode-preserving ability, which are our main contributions."}]
Hy-lMNqex
[{"section_index": "0", "section_name": "TARTAN: ACCELERATING FULLY-CONNECTED AND\nCONVOLUTIONAL LAYERS IN DEEP LEARNING NET-\nWORKS BY EXPLOITING NUMERICAL PRECISION\nVARIABILITY", "section_text": "Alberto Delmas Lascorz, Sayeh Sharify, Patrick Judd & Andreas Moshovos\n{delmaslli, sayeh, judd, moshovos}@ece -utoronto.ca\nTartan TRT a hardware accelerator for inference with Deep Neural Networks\n(DNNs) is presented and evaluated on Convolutional Neural Networks. TRT ex-\nploits the variable per layer precision requirements of DNNs to deliver execution\ntime that is proportional to the precision p in bits used per layer for convolutional\nand fully-connected layers. Prior art has demonstrated an accelerator with the\nsame execution performance only for convolutional layer{Judd et al.] (2016ajc).\nExperiments on image classification CNNs show that on average across all net-\nworks studied, TRT outperforms a state-of-the-art bit-parallel accelerator\n\nby 1.90 without any loss in accuracy while it is 1.17x more en-\nergy efficient. TRT requires no network retraining while it enables trading off\naccuracy for additional improvements in execution performance and energy effi-\nciency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on\naverage 2.04 faster and 1.25x more energy efficient than a conventional bit-\nparallel accelerator. A Tartan configuration that processes 2-bits at time, requires\nless area than the 1-bit configuration, improves efficiency to 1.24 over the bit-\nparallel baseline while being 73% faster for convolutional layers and 60% faster\nfor fully-connected layers is also presented."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "It is only recently that commodity computing hardware in the form of graphics processors delivered\nthe performance necessary for practical, large scale Deep Neural Network applications|Krizhevsky\n(2012). At the same time, the end of Dennard Scaling in semiconductor technology |Es-\nmakes it difficult to deliver further advances in hardware performance\nusing existing general purpose designs. It seems that further advances in DNN sophistication would\nhave to rely mostly on algorithmic and in general innovations at the software level which can be\nhelped by innovations in hardware design. Accordingly, hardware DNN accelerators have emerged.\nThe DianNao accelerator family was the first to use a wide single-instruction single-data (SISD)\narchitecture to process up to 4K operations in parallel on a single chip (2014a]b) out-\nperforming graphics processors by two orders of magnitude. Development in hardware accelerators\nhas since proceeded in two directions: either toward more general purpose accelerators that can\nsupport more machine learning algorithms while keeping performance mostly on par with DaDian-\n\nNao (DaDN) {Chen et al.| (20146), or toward further specialization of specific layers or classes of\nDNNs with the goal of outperforming DaDN in execution time and/or energy efficiency, e.g.,|Han\n(2016); (2016a); (2016a); (Chen, Yu-Hsin and Krishna, Tushar and\n(2016);/Reagen et al.](2016). This work is along the second direction.\n\nection|5|reviews several other accelerator designs.\nWhile DaDN\u2019s functional units process 16-bit fixed-point values, DNNs exhibit varying precision\nrequirements across and within layers, e.g.,/Judd et al.| (2015). 
Accordingly, it is possible to use"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "shorter, per layer representations for activations and/or weights. However, with existing bit-parallel functional units doing so does not translate into a performance nor an energy advantage, as the values are expanded into the native hardware precision inside the unit.

This work presents Tartan (TRT), a massively parallel hardware accelerator whose execution time for fully-connected and convolutional layers scales with the precision p used to represent the input values. TRT uses hybrid bit-serial/bit-parallel functional units and exploits the abundant parallelism of typical DNN layers with the goal of exceeding DaDN's execution time performance and energy efficiency. Ideally Tartan can improve execution time by 16/p, where p is the precision used for the activations in convolutional layers, and for the activations and weights in fully-connected layers. Every bit of precision that can be eliminated ideally reduces execution time and increases energy efficiency. TRT builds upon the Stripes (STR) accelerator (Judd et al., 2016a;c), which improves execution time and energy efficiency only for convolutional layers.

This work evaluates TRT on a set of convolutional neural networks (CNNs) for image classification. On average TRT reduces inference time by 1.61×, 1.91× and 1.90× over DaDN for the fully-connected, the convolutional, and all layers respectively. Energy efficiency compared to DaDN with TRT is 0.92×, 1.18× and 1.17× respectively. TRT enables trading off accuracy for improving execution time and energy efficiency. For example, on average for the fully-connected layers, accepting a 1% loss in accuracy improves performance to 1.73× and energy efficiency to 1.00× compared to DaDN.

The rest of this document is organized as follows: Section 2 illustrates the key concepts behind TRT via an example. Section 3 reviews the DaDN architecture and presents an equivalent Tartan configuration. Section 4 presents the experimental results. Section 5 reviews related work and discusses the limitations of this study and the potential challenges with TRT. Section 6 concludes.

This section illustrates at a high level the TRT design by showing how it would process two purposely trivial cases: 1) a fully-connected layer (FCL) with a single input activation producing two output activations, and 2) a convolutional layer (CVL) with two input activations and one single-weight filter producing two output activations. The per layer calculations are:

Fully-Connected:      Convolutional:
f1 = w1 × a           c1 = w × a1
f2 = w2 × a           c2 = w × a2

where f1, f2, c1 and c2 are output activations, w1, w2, and w are weights, and a1, a2 and a are input activations. For clarity all values are assumed to be represented in 2 bits of precision.

Figure 1a shows a bit-parallel processing engine representative of DaDN. Every cycle, the engine can calculate the product of two 2-bit inputs, i (weight) and v (activation), and accumulate or store it into the output register OR. Parts (b) and (c) of the figure show how this unit can calculate the example CVL over two cycles. In part (b) and during cycle 0, the unit accepts along the v input bits 0 and 1 of a1 (noted as a1/0 and a1/1 respectively on the figure), and along i bits 0 and 1 of w, and produces both bits of output c1. Similarly, during cycle 1 (part (c)), the unit processes a2 and w to produce c2. In total, over two cycles, the engine produced two 2b × 2b products.
Processing the\nexample FCL also takes two cycles: In the first cycle w; and a produce f1, and in the second cyck\nwa and a produce f\u00bb. This process is not shown in the interest of space."}, {"section_index": "3", "section_name": "2.2 Tartan\u2019S APPROACH", "section_text": "Figure [2] shows how a TRT-like engine would process the example CVL. Figure [2a] shows the en\ngine\u2019s structure which comprises two subunits. The two subunits accept each one bit of an activatior\nper cycle through inputs v0 and v1 respectively and as before, there is a common 2-bit weight inpu\n(i1, 70). In total, the number of input bits is 4, identical to the bit-parallel engine.\nFully \u2014 Connected : Convolutional :\nfi=wixa cy = wx ay,\nfo =wWo Xa Co = WX ag\nOR\n\n(a)\n\na1/o| [-\n\nOR\n\n\u00a5\n\n2/0} [_\n\nOR\n\n>|\n\nW/o\n\n>|\n\nx\nFigure 1: Bit-Parallel Engine processing the convolutional layer over two cycles: a) Structure, b)\nCycle 0, and c) Cycle 1.\nvi\n\nvo\nAR oR oR] \u00a33 BR gn AR BR OR AR BR oR\n+\n=I a | x \u00a9 . ,\ni | Be rL \u201c wo\nL | wil\n\u2014 \u2014 wid ee\n(a) Engine Structure (b) Cycle 1: Parallel Load w on BR:\nai/o 2/0 al/1) a2/1\nAR BR OR AR BR OR AR BR OR AR BR OR\nbo] wa bo] wr bo} wit an bof wt an\nw/o w/o w/o eo w/o a\nF\n\n(c) Cycle 2: Multiply w with bits 0 of\nthe activations\n\n(d) Cycle 3: Multiply w with bits 1 of\n\nthe activations\nFigure 2: Processing the example Convolutional Layer Using TRT\u2019s Approach.\nAR BR\n\noR\n\nAR BR\n\noR\n\naR eR | OR AR BR | OR aR er | OR AR BR | OR\nob o wp te wap te wilt fof wart wart fof wart\nwifi * w2/d * = =n w1/o |] | w/o * w2/0 |} | w2/0 *\nwal wy/o| _ ft\nafr \u2014_ w2/o _ rn\n) Cycle 1: Shift in bits 1 of (b) Cycle 2: Shift in bits 0 of (c) Cycle 3: Copy AR into BR\neights into the ARs weights into the ARs\nao a/0, \"1 aly\nAR BR OR AR BR OR AR BR OR AR BR OR\nwast bo} warn +] | war bl wan : wat bl was} faa | | wars bof ware] |] oa\nwi/o |} | w/o . . w2/0 |} | w2/0 . . w1/o |} | w/o , f1/0 w2/0 w2/0 . 12/0\nr \u00a5 |\n~ _ 1\n(d) Cycle 4: Multiply weights with(e) Cycle 5: Multiply weights with\n\nfirst bit of a\n\nsecond bit of a\nFigure 3: Processing the example Fully-Connected Layer using TRT\u2019s Approach\nEach subunit contains three 2-bit registers: a shift-register AR, a parallel load register BR, and ar\nparallel load output register OR. Each cycle each subunit can calculate the product of its single bi\nv; input with BR which it can write or accumulate into its OR. There is no bit-parallel multiplie:\nsince the subunits process a single activation bit per cycle. Instead, two AND gates, a shift-and-adc\nfunctional unit, and OR form a shift-and-add multiplier/accumulator. Each AR can load a single bi\nper cycle from one of the 7 wires, and BR can be parallel loaded from AR or from the 7 wires.\nConvolutional Layer: Figure [26] through Figure [2d] show how the CVL is processed. The figures\nabstract away the unit details showing only the register contents. As Figure[2b]shows, during cycle\n1, the w synapse is loaded in parallel to the BRs of both subunits via the 11 and i0 inputs. During\ncycle 2, bits 0 of a; and of az are sent via the v0 and v1 inputs respectively to the first and second\nsubunit. The subunits calculate concurrently a;/0 x w and a2/0 x w and accumulate these results\ninto their ORs. Finally, in cycle 3, bit 1 of a, and az appear respectively on v0 and v1. 
If the activations a1 and a2 could be represented in just one bit, then this engine would be producing two output activations per cycle, twice the bandwidth of the bit-parallel engine. The latter is incapable of exploiting the reduced precision. In general, if the bit-parallel hardware was using P_base bits to represent the activations while only P_a bits were enough, TRT would outperform the bit-parallel engine by P_base/P_a.

Fully-Connected Layer: Figure 3 shows how a TRT-like unit would process the example FCL. As Figure 3a shows, in cycle 1, bit 1 of w1 and of w2 appear respectively on lines i1 and i0. The left subunit's AR is connected to i1 while the right subunit's AR is connected to i0. The ARs shift the corresponding bits into their least significant bit, sign-extending to the vacant position (shown as a 0 bit on the example). During cycle 2, as Figure 3b shows, bits 0 of w1 and of w2 appear on the respective i lines and the respective ARs shift them in. At the end of the cycle, the left subunit's AR contains the full 2-bit w1 and the right subunit's AR the full 2-bit w2. In cycle 3, Figure 3c shows that the contents of AR are copied to BR in each subunit. From the next cycle, calculating the products can now proceed similarly to what was done for the CVL. In this case, however, each BR contains a different weight, whereas in the CVL all BRs held the same w value. The shift capability of the ARs coupled with the different i wire per subunit connection allowed us to load a different weight bit-serially over two cycles. Figure 3d and Figure 3e show cycles 4 and 5 respectively. During cycle 4, bit 0 of a appears on both v inputs and is multiplied with the BR in each subunit. In cycle 5, bit 1 of a appears on both v inputs and the subunits complete the calculation of f1 and f2. It takes two cycles to produce the two 2b × 2b products once the correct inputs appear in the BRs.

While in our example no additional inputs nor outputs are shown, it would have been possible to overlap the loading of a new set of w inputs into the ARs while processing the current weights stored in the BRs. That is, the loading into ARs, copying into BRs, and the bit-serial multiplication of the BRs with the activations form a 3-stage pipeline where each stage can take multiple cycles. In general, assuming that both activations and weights are represented using 2 bits, this engine would match the performance of the bit-parallel engine in the steady state. When both sets of inputs i and v can be represented with fewer bits, 1 in this case, the engine would produce two terms per cycle, twice the bandwidth of the bit-parallel engine of the previous section.

Summary: In general, if P_base is the precision of the bit-parallel engine, and P_a^L and P_w^L the precisions that can be used respectively for activations and weights for layer L, a TRT engine can ideally outperform an equivalent bit-parallel engine by P_base / P_a^L for CVLs, and by P_base / max(P_a^L, P_w^L) for FCLs. This example used the simplest TRT engine configuration. Since typical layers exhibit massive parallelism, TRT can be configured with many more subunits while exploiting weight reuse for CVLs and activation reuse for FCLs.
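The ideal speedups of the Summary reduce to two one-liners; a small sketch for later reference (P_base = 16 everywhere in this paper):

```python
def ideal_speedup_cvl(p_a: int, p_base: int = 16) -> float:
    return p_base / p_a                   # CVLs: only activations are serial

def ideal_speedup_fcl(p_a: int, p_w: int, p_base: int = 16) -> float:
    return p_base / max(p_a, p_w)         # FCLs: weights also load bit-serially

print(ideal_speedup_cvl(8), ideal_speedup_fcl(10, 10))   # 2.0 1.6
```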
The next section describes the baseline state-of-the-art DNN accelerator and presents an equivalent TRT configuration.

Figure 5: Overview of the system components and their communication. a) DaDN. b) Tartan.

This work presents TRT as a modification of the state-of-the-art DaDianNao accelerator. Accordingly, Section 3.1 reviews DaDN's design and how it can process FCLs and CVLs. For clarity, in what follows the term brick refers to a set of 16 elements of a 3D activation or weight array¹ which are contiguous along the i dimension, e.g., a(x, y, i)...a(x, y, i + 15). Bricks will be denoted by their origin element with a B subscript, e.g., a_B(x, y, i). The size of a brick is a design parameter."}, {"section_index": "4", "section_name": "3.1 BASELINE SYSTEM: DADIANNAO", "section_text": "TRT is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed by Chen et al. (2014b). Figure 4a shows a DaDN tile which processes 16 filters concurrently, calculating 16 activation and weight products per filter for a total of 256 products per cycle. Each cycle the tile accepts 16 weights per filter, for a total of 256 synapses, and 16 input activations. The tile multiplies each weight with only one activation, whereas each activation is multiplied with 16 weights, one per filter. The tile reduces the 16 products into a single partial output activation per filter, for a total of 16 partial output activations for the tile. Each DaDN chip comprises 16 such tiles, each processing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes 16 activations and 256 × 16 = 4K weights, producing 16 × 16 = 256 partial output activations, 16 per tile.

Internally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle, one per weight lane, 2) an input neuron buffer (NBin) which provides 16 activations per cycle through 16 neuron lanes, and 3) a neuron output buffer (NBout) which accepts 16 partial output activations per cycle. In the tile's datapath each activation lane is paired with 16 weight lanes, one from each filter. Each synapse and neuron lane pair feeds a multiplier, and an adder tree per filter lane reduces the 16 per filter products into a partial sum. In all, the filter lanes each produce a partial sum per cycle, for a total of 16 partial output activations per cycle. Once a full window is processed, the 16 resulting sums are fed through a non-linear activation function, f, to produce the 16 final output activations. The multiplications and reductions needed per cycle are implemented via 256 multipliers, one per weight lane, and sixteen 17-input (16 products plus the partial sum from NBout) adder trees, one per filter lane.

¹An FCL can be thought of as a CVL where the input activation array has unit x and y dimensions, there are as many filters as output activations, and the filter dimensions are identical to the input activation array.

Figure 4: Processing Tiles. a) DaDianNao. b) Tartan.
Figure 5a shows an overview of the DaDN chip. There are 16 processing tiles connected via an interconnect to a shared central eDRAM Neuron Memory (NM). DaDN's main goal was minimizing off-chip bandwidth while maximizing on-chip compute utilization. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM Synapse Buffer (SB) for weights per tile, for a total of 32MB of eDRAM. All inter-layer activation outputs except for the initial input and the final output are stored in NM, which is connected via a broadcast interconnect to the 16 Input Neuron Buffer (NBin) buffers. All values are 16-bit fixed-point, hence a 256-bit wide interconnect can broadcast a full activation brick in one step. Off-chip accesses are needed only for: 1) reading the input image, 2) reading the weights once per layer, and 3) writing the final output.

Processing starts by reading from external memory the first layer's filter weights and the input image. The weights are distributed over the SBs and the input is stored into NM. Each cycle an input activation brick is broadcast to all units. Each unit reads 16 weight bricks from its SB and produces a partial output activation brick which it stores in its NBout. Once computed, the output activations are stored through NBout to NM and then fed back through the NBins when processing the next layer. Loading the next set of weights from external memory can be overlapped with the processing of the current layer as necessary.

As Section 2 explained, TRT processes activations bit-serially, multiplying a single activation bit with a full weight per cycle. Each DaDN tile multiplies 16 16-bit activations with 256 weights each cycle. To match DaDN's computation bandwidth, TRT needs to multiply 256 1-bit activations with 256 weights per cycle. Figure 4b shows the TRT tile. It comprises 256 Serial Inner-Product Units (SIPs) organized in a 16 × 16 grid. Similar to DaDN, each SIP multiplies 16 weights with 16 activations and reduces these products into a partial output activation. Unlike DaDN, each SIP accepts 16 single-bit activation inputs. Each SIP has two registers, each a vector of 16 16-bit subregisters: 1) the Serial Weight Register (SWR), and 2) the Weight Register (WR). These correspond to AR and BR of the example of Section 2. NBout remains as in DaDN; however, it is distributed along the SIPs as shown.

Convolutional Layers: Processing starts by reading in parallel 256 weights from the SB as in DaDN, and loading the 16 per SIP row weights in parallel to all SWRs in the row. Over the next P_a^L cycles, the weights are multiplied by the bits of an input activation brick per column. TRT exploits weight reuse across 16 windows, sending a different input activation brick to each column. For example, for a CVL with a stride of 4, a TRT tile will process 16 activation bricks a_B(x, y, i), a_B(x + 4, y, i) through a_B(x + 63, y, i) in parallel, a bit per cycle.
Assuming that the tile processes filters f_i through f_{i+15}, after P_a^L cycles it would produce the following partial output activations: o_B(x/4, y/4, f_i) through o_B(x/4 + 15, y/4, f_i), that is 16 output activation bricks contiguous on the x dimension. Whereas DaDN would process 16 activation bricks over 16 cycles, TRT processes them concurrently but bit-serially over P_a^L cycles. If P_a^L is less than 16, TRT will outperform DaDN by 16/P_a^L, and when P_a^L is 16, TRT will match DaDN's performance.

Fully-Connected Layers: Processing starts by loading bit-serially and in parallel, over P_w^L cycles, 4K weights into the SWRs. Each SWR per row gets a different set of 16 weights, as each subregister is connected to one out of the 256 wires of the SB output bus for the SIP row. Once the weights have been loaded, the SWRs are copied to the WRs and multiplication with the input activations can then proceed bit-serially over P_a^L cycles. Assuming that there are enough output activations so that a different output activation can be assigned to each SIP, the same input activation brick can be broadcast to all SIP columns. For example, for an FCL a TRT tile will process one activation brick a_B(i) bit-serially to produce 16 output activation bricks o_B(i) through o_B(i × 16), one per SIP column. Loading the next set of weights can be done in parallel with processing the current set, thus execution time is constrained by P_max^L = max(P_a^L, P_w^L). Thus, a TRT tile produces 256 partial output activations every P_max^L cycles, a speedup of 16/P_max^L over DaDN, since a DaDN tile always needs 16 cycles to do the same.

For TRT to be fully utilized an FCL must have at least 4K output activations. Some of the networks studied have a layer with as little as 2K output activations. To avoid underutilization, the SIPs along each row are cascaded into a daisy-chain, where the output of one can feed into an input of the next via a multiplexer. This way, the computation of an output activation can be sliced over the SIPs along the same row. In this case, each SIP processes only a portion of the input activations, resulting in several partial output activations along the SIPs on the same row. Over the next np cycles, where np is the number of slices used, the np partial outputs can be reduced into the final output activation. The user can choose any number of slices up to 16, so that TRT can be fully utilized even with fully-connected layers of just 256 outputs. For example, in NeuralTalk (Karpathy & Li, 2014) the smallest layers can have 600 outputs or fewer.
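A rough cycle model of this pipelining, as a sketch only (it is not the cycle-accurate simulator of Section 4 and ignores dispatch and slicing details):

```python
def trt_fcl_cycles(p_w: int, p_a: int, weight_sets: int) -> int:
    """Per tile: only the first serial weight load is exposed; afterwards
    loading overlaps compute, so each set costs max(P_w, P_a) cycles."""
    return p_w + weight_sets * max(p_w, p_a)

def fcl_speedup_vs_dadn(p_w: int, p_a: int, weight_sets: int = 1000) -> float:
    return (16 * weight_sets) / trt_fcl_cycles(p_w, p_a, weight_sets)

print(round(fcl_speedup_vs_dadn(10, 10), 2))   # ~1.6x, cf. Table 2's 1.61x
```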
"}, {"section_index": "5", "section_name": "3.3 SIP AND OTHER COMPONENTS", "section_text": "SIP: Bit-Serial Inner-Product Units: Figure 6 shows TRT's Bit-Serial Inner-Product Unit (SIP). Each SIP multiplies 16 activations by 16 weights to produce an output activation. Each SIP has two registers, a Serial Weight Register (SWR) and a Weight Register (WR), each containing 16 16-bit subregisters. Each SWR subregister is a shift register with a single bit connection to one of the weight bus wires that is used to read weights bit-serially for FCLs. Each WR subregister can be parallel loaded from either the weight bus or the corresponding SWR subregister, to process CVLs or FCLs respectively. Each SIP includes 256 2-input AND gates that multiply the weights in the WR with the incoming activation bits, and a 16 × 16b adder tree that sums the partial products. A final adder plus a shifter accumulate the adder tree results into an output register. In each SIP, a multiplexer at the first input of the adder tree implements the cascade mode supporting slicing the output activation computation along the SIPs of a single row. To support signed 2's complement neurons, the SIP can subtract the weight corresponding to the most significant bit (MSB) from the partial sum when the MSB is 1. This is done with negation blocks for each weight before the adder tree. Each SIP also includes a comparator (max) to support max pooling layers.

Figure 6: TRT's SIP.

Dispatcher and Reducers: Figure 5b shows an overview of the full TRT system. As in DaDN there is a central NM and 16 tiles. A Dispatcher unit is tasked with reading input activations from NM, always performing eDRAM-friendly wide accesses. It transposes each activation and communicates each a bit at a time over the global interconnect. For CVLs the dispatcher has to maintain a pool of multiple activation bricks, each from a different window, which may require fetching multiple rows from NM. However, since a new set of windows is only needed every P_a^L cycles, the dispatcher can keep up for the layers studied. For FCLs one activation brick is sufficient. A Reducer per tile is tasked with collecting the output activations and writing them to NM. Since output activations take multiple cycles to produce, there is sufficient bandwidth to sustain all 16 tiles.

Other Layers: TRT, like DaDN, can process the additional layers needed by the studied networks. For this purpose the tile includes additional hardware support for max pooling similar to DaDN. An activation function unit is present at the output of NBout in order to apply nonlinear activations before the output neurons are written back to NM."}, {"section_index": "6", "section_name": "3.4 PROCESSING SEVERAL BITS AT ONCE", "section_text": "In order to improve TRT's area and power efficiency, the number of bits processed at once can be parameterized. In this case, the weights are multiplied with several activation bits at once, and the multiplication results are partially shifted before they are inserted into their corresponding adder tree.

In order to load the weights on time, the SWR subregister has to be modified so it can load several bits in parallel, and shift that number of positions every cycle. The negation block (for 2's complement support) will operate only over the most significant product result.

The chief advantage of such a design is that fewer SIPs are needed in order to achieve the same throughput: for example, processing 2 bits at once allows reducing the number of columns from 16 to 8. Although the total number of bus wires is similar, the distance they have to cover is significantly reduced. Likewise, the total number of adders required stays similar, but they are clustered closer together. A drawback of this design is the limitation to precisions that are exact multiples of the number of bits processed at once.
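A functional sketch of the multi-bit variant (pure Python; the function name is ours): the per-cycle contribution becomes the weight times a 2-bit activation chunk, shifted by two positions per cycle, and precisions must round up to a multiple of the chunk width.

```python
def serial_product(w: int, a: int, p: int, bits_at_once: int = 2) -> int:
    assert p % bits_at_once == 0        # precision must be an exact multiple
    acc = 0
    for c in range(p // bits_at_once):  # half the cycles of the 1-bit design
        chunk = (a >> (c * bits_at_once)) & ((1 << bits_at_once) - 1)
        acc += (w * chunk) << (c * bits_at_once)
    return acc

assert serial_product(5, 11, 4) == 55   # matches the bit-parallel product
```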
This section evaluates TRT's performance, energy and area, and explores the trade-off between accuracy and performance compared to DaDN."}, {"section_index": "7", "section_name": "4.1 METHODOLOGY", "section_text": "Numerical Representation Requirements Analysis: The per layer precision profiles are found via the methodology of Judd et al. (2015). Caffe (Jia et al., 2014) was used to measure how reducing the precision of each FCL affects the network's overall top-1 prediction accuracy over 5000 images. The network definitions and pre-trained synaptic weights are taken from the Caffe Model Zoo (Jia, 2015). Since TRT's performance for FCLs is bound by the maximum of the weight and activation precisions, our exploration was limited to the cases where both are the same. The search procedure is a gradient descent where a given layer's precision is iteratively decremented one bit at a time, until the network's accuracy drops. For weights, the fixed-point numbers are set to represent values between -1 and 1. For activations, the number of fractional bits is fixed to a previously determined value known not to hurt accuracy, as per Judd et al. (2015). While both activations and weights use the same number of bits, their precisions and ranges differ.

Performance, Area and Energy: DaDN, STR and TRT were modeled using the same methodology for consistency. A custom cycle-accurate simulator models execution time. Computation was scheduled as described by Judd et al. (2016a) to maximize energy efficiency for DaDN. The logic components of both systems were synthesized with the Synopsys Design Compiler for a TSMC 65nm library to report power and area. The circuit is clocked at 980 MHz. The NBin and NBout SRAM buffers were modelled using CACTI (Muralimanohar & Balasubramonian). The eDRAM area and energy were modelled with Destiny (Poremba et al., 2015).

Fully-Connected Layer Precisions: Table 1 reports the per layer precisions for the CVLs and FCLs of the networks studied, along with the speedup over DaDN that would be ideally possible. The discussion in this section focuses solely on FCLs. The precisions that can be used vary from 8 up to 10 bits vs. the 16 bits DaDN uses. The ideal speedup ranges from 63% to 66% with no accuracy loss. Additional exploration of the precision space may yield even shorter precisions without sacrificing accuracy. Modest additional improvements are possible with a loss of 1% in accuracy.
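The search just described can be summarized in a few lines. This is a sketch: `accuracy_over_5000_images` is a hypothetical stand-in for running the Caffe model with the trial per-layer bit widths, not part of our tooling.

```python
def find_precisions(layers, accuracy_over_5000_images, baseline_acc, tol=0.0):
    """Decrement each layer's precision one bit at a time while the network
    keeps its accuracy; tol=0.01 would yield the 99% profiles of Table 1."""
    prec = {layer: 16 for layer in layers}        # DaDN's 16-bit baseline
    changed = True
    while changed:
        changed = False
        for layer in layers:
            if prec[layer] <= 1:
                continue
            trial = dict(prec)
            trial[layer] -= 1
            if accuracy_over_5000_images(trial) >= baseline_acc - tol:
                prec = trial                      # keep the shorter precision
                changed = True
    return prec
```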
Execution Time: Table 2 reports TRT's performance and energy efficiency relative to DaDN for the precision profiles in Table 1, separately for the fully-connected layers, for the convolutional layers, and for the whole network. For the 100% profile, where no accuracy is lost, TRT yields, on average, a speedup of 1.61× over DaDN on FCLs. With the 99% profile, it improves to 1.73×.

Table 1: Per layer synapse precision profiles needed to maintain the same accuracy as in the baseline. Ideal: Potential speedup with TRT over a 16-bit bit-parallel baseline.

                 Convolutional layers                      Fully connected layers
Network          Per Layer Activation        Ideal        Per Layer Activation and    Ideal
                 Precision in Bits           Speedup      Weight Precision in Bits    Speedup
100% Accuracy
AlexNet          9-8-5-5-7                   2.38         10-9-9                      1.66
VGG_S            7-8-9-7-9                   2.04         10-9-9                      1.64
VGG_M            7-7-7-8-7                   2.23         10-8-8                      1.64
VGG_19           12-12-12-11-12-10-11-11-    1.35         10-9-9                      1.63
                 13-12-13-13-13-13-13-13
99% Accuracy
AlexNet          9-7-4-5-7                   2.58         9-8-8                       1.85
VGG_S            7-8-9-7-9                   2.04         9-9-8                       1.79
VGG_M            6-8-7-7-7                   2.34         9-8-8                       1.80
VGG_19           9-9-9-8-12-10-10-12-13-     1.57         10-9-8                      1.63
                 11-12-13-13-13-13-13

Table 2: Execution time and energy efficiency improvement with TRT compared to DaDN.

            Fully Connected Layers           Convolutional Layers
Accuracy    100%           99%               100%           99%
            Perf    Eff    Perf    Eff       Perf    Eff    Perf    Eff
AlexNet     1.61    0.92   1.80    1.04      2.32    1.43   2.52    1.55
VGG_S       1.61    0.92   1.76    1.01      1.97    1.21   1.97    1.21
VGG_M       1.61    0.93   1.77    1.02      2.18    1.34   2.29    1.40
VGG_19      1.60    0.92   1.61    0.93      1.35    0.83   1.56    0.96
geomean     1.61    0.92   1.73    1.00      1.91    1.18   2.05    1.26

We have also performed an evaluation of NeuralTalk LSTM (Karpathy & Li, 2014), which uses long short-term memory to automatically generate image captions. Precision can be reduced down to 11 bits without affecting the accuracy of the predictions (measured as the BLEU score when compared to the ground truth), resulting in an ideal performance improvement of 1.45×, translating into a 1.38× speedup with TRT.
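Table 1's ideal FCL numbers can be sanity-checked from the formulas of Section 2. The sketch below uses an unweighted geometric mean over layers, so it only approximates the reported values, which additionally reflect each layer's share of execution time:

```python
from math import prod

def ideal_fcl_speedup(bits):                 # FCLs: P_a = P_w in our profiles
    ratios = [16 / p for p in bits]
    return prod(ratios) ** (1 / len(ratios))

print(round(ideal_fcl_speedup([10, 9, 9]), 2))   # AlexNet 100%: ~1.7 vs. 1.66
```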
Energy Efficiency: This section compares the energy efficiency, or simply efficiency, of TRT and DaDN. Energy efficiency is the inverse of the relative energy consumption of the two designs. The average efficiency improvement with TRT across all networks and layers for the 100% profile is 1.17×. In the FCLs, TRT is not as efficient as DaDN; however, the energy efficiency for CVLs more than compensates when whole networks are considered, except for VGG_19. Regardless, performance would not scale linearly if DaDN were to include more tiles in an attempt to match TRT's performance: under-utilization for most layers in these networks would severely reduce any performance improvements delivered via additional tiles under DaDN. Overall, efficiency primarily comes from the reduction in effective computation following the use of reduced precision arithmetic for the inner product operations. Furthermore, the amount of data that has to be transmitted from the SB and the traffic between the central eDRAM and the SIPs is decreased proportionally to the chosen precision. When the per layer precisions are reduced adequately, TRT becomes more efficient than DaDN.

There are two main reasons the ideal speedup can't be reached in practice: dispatch overhead and underutilization. Dispatch overhead occurs on the initial P_w^L cycles of execution, where the serial weight loading process prevents any useful products from being performed. In practice, this overhead is less than 2% for any given network, although it can be as high as 6% for the smallest layers. Underutilization can happen when the number of output neurons is not a power of two, or lower than 256. The last classifier layers of networks designed towards recognition of the 1000 ImageNet classes (Russakovsky et al., 2014) are an example.

Table 3: Area Breakdown for TRT and DaDN.

                        TRT area (mm²)     TRT 2-bit area (mm²)   DaDN area (mm²)
Inner-Product Units     57.27 (47.11%)     37.66 (37.50%)         17.85 (22.20%)
Synapse Buffer          48.11 (40.08%)     48.11 (47.90%)         48.11 (59.83%)
Input Neuron Buffer     3.66 (3.05%)       3.66 (3.64%)           3.66 (4.55%)
Output Neuron Buffer    3.66 (3.05%)       3.66 (3.64%)           3.66 (4.55%)
Neuron Memory           7.13 (5.94%)       7.13 (7.10%)           7.13 (8.87%)
Dispatcher              0.21 (0.17%)       0.21 (0.21%)           -
Total                   120.04 (100%)      100.43 (100%)          80.41 (100%)
Normalized Total        1.49×              1.25×                  1.00×

Table 4: Relative performance of the 2-bit TRT variation compared to DaDN and the 1-bit TRT.

            Fully Connected Layers       Convolutional Layers
            vs. DaDN    vs. 1b TRT       vs. DaDN    vs. 1b TRT
AlexNet     +58%        -2.06%           +108%       -11.71%
VGG_S       +59%        -1.25%           +76%        -12.09%
VGG_M       +63%        +1.12%           +91%        -13.78%
VGG_19      +59%        -0.97%           +29%        -4.11%
geomean     +60%        -0.78%           +73%        -10.36%

Area Overhead: Table 3 reports the area breakdown of TRT and DaDN. Over the full chip, TRT needs 1.49× the area compared to DaDN while delivering on average a 1.90× improvement in speed. Generally, performance would scale sublinearly with area for DaDN due to underutilization. The 2-bit variant, which has a lower area overhead, is described in detail in the next section."}, {"section_index": "8", "section_name": "4.3 TWO-BIT AT ONCE PERFORMANCE EVALUATION", "section_text": "We evaluate the performance of a multi-bit design as described in Section 3.4, where 2 bits are processed every cycle in half as many total SIPs. The precisions used are the same as indicated in Table 1 for 100% accuracy, rounded up to the next multiple of two. The results are shown in Table 4. The 2-bit TRT always improves performance compared to DaDN, as the "vs. DaDN" columns show. Compared to the 1-bit TRT, performance is slightly lower; however, given that the area of the 2-bit TRT is much lower, this can be a good trade-off. Overall, there are two forces at work that shape performance relative to the 1-bit TRT. There is performance potential lost due to rounding all precisions to an even number, and there is performance benefit from requiring less parallelism. The time needed to serially load the first bundle of weights is also reduced. In VGG_19 the performance benefit due to the lower parallelism requirement outweighs the performance loss due to precision rounding. In all other cases, the reverse is true.

A hardware synthesis and layout of both DaDN and TRT's 2-bit variant using TSMC 65nm typical case libraries shows that the total area overhead can be as low as 24.9%, with an improved energy efficiency in fully connected layers of 1.24× on average."}, {"section_index": "9", "section_name": "5 RELATED WORK AND LIMITATIONS OF THIS WORK", "section_text": "The recent success of Deep Learning has led to several proposals for hardware acceleration of DNNs. This section reviews some of these recent efforts. However, specialized hardware design for neural networks is a field with a relatively long history. Relevant to TRT, bit-serial processing hardware for neural networks was proposed several decades ago, e.g., Svensson & Nordstrom (1990); Murray et al. (1988). While the performance of these designs scales with precision, it would be lower than that of an equivalently configured bit-parallel engine. For example, one of these designs uses an interesting bit-serial multiplier which requires O(4 × p) cycles, where p is the precision in bits.
Furthermore, as semiconductor technology has progressed, the number of resources that can be put on chip and the trade-offs (e.g., relative speed of memory vs. transistors vs. wires) are today vastly different, facilitating different designs. However, truly bit-serial processing such as that used in the aforementioned proposals needs to be revisited with today's technology constraints due to its potentially high compute density (compute bandwidth delivered per area).

In general, hardware acceleration for DNNs has recently progressed in two directions: 1) considering more general purpose accelerators that can support additional machine learning algorithms, and 2) considering further improvements primarily for convolutional neural networks and the two layer types most dominant in terms of execution time: convolutional and fully-connected. In the first category there are accelerators such as Cambricon (Liu et al., 2016) and Cambricon-X (Zhang et al., 2016). While targeting support for more machine learning algorithms is desirable, work on further optimizing performance for specific algorithms such as TRT is valuable and needs to be pursued, as it will affect such general purpose accelerators.

TRT is closely related to Stripes (Judd et al., 2016c;a), whose execution time scales with precision but only for CVLs. STR does not improve performance for FCLs. TRT improves upon STR by enabling: 1) performance improvements for FCLs, and 2) slicing the activation computation across multiple SIPs, thus preventing underutilization for layers with fewer than 4K outputs. Pragmatic uses an organization similar in spirit to STR, but its performance on CVLs depends only on the number of activation bits that are 1 (Albericio et al., 2016b). It should be possible to apply the TRT extensions to Pragmatic; however, performance in FCLs will still be dictated by weight precision. The area and energy overheads would need to be amortized by a commensurate performance improvement.

The Efficient Inference Engine (EIE) uses synapse pruning, weight compression, zero activation elimination, and network retraining to drastically reduce the amount of computation and data communication when processing fully-connected layers (Han et al., 2016). An appropriately configured EIE will outperform TRT for FCLs, provided that the network is pruned and retrained. However, the two approaches attack a different component of FCL processing and there should be synergy between them. Specifically, EIE currently does not exploit the per layer precision variability of DNNs and relies on retraining the network. It would be interesting to study how EIE would benefit from a TRT-like compute engine where EIE's data compression and pruning are used to create vectors of weights and activations to be processed in parallel. EIE uses single-lane units whereas TRT uses a coarser-grain lane arrangement and thus would be prone to more imbalance. A middle ground may be able to offer some performance improvement while compensating for cross-lane imbalance.

Eyeriss uses a systolic-array-like organization and gates off computations for zero activations (Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne, 2016), and targets primarily high energy efficiency. An actual prototype has been built and is in full operation.
Cnvlutin is a SIMD accelerator that skips on-the-fly ineffectual activations such as those that are zero or close to zero (Albericio et al., 2016a). Minerva is a DNN hardware generator which also takes advantage of zero activations and targets high energy efficiency (Reagen et al., 2016). Layer fusion can further reduce off-chip communication. As multiple layers are processed concurrently, a straightforward combination with TRT would use the maximum of the precisions when layers are fused.

Google's Tensor Processing Unit uses quantization to represent values using 8 bits (Jouppi, 2016) to support TensorFlow (Abadi et al., 2015). As Table 1 shows, some layers can use lower than 8 bits of precision, which suggests that even with quantization it may be possible to use fewer levels and to potentially benefit from an engine such as TRT.

Limitations: As in DaDN, this work assumed that each layer fits on-chip. However, as networks evolve it is likely that they will increase in size, thus requiring multiple TRT nodes as was suggested in DaDN. However, some newer networks tend to use more but smaller layers. Regardless, it would be desirable to reduce the area cost of TRT, most of which is due to the eDRAM buffers. We have not explored this possibility in this work. Proteus (Judd et al., 2016b) is directly compatible with TRT and can reduce memory footprint by about 60% for both convolutional and fully-connected layers. Ideally, compression, quantization and pruning similar in spirit to EIE (Han et al., 2016) would be used to reduce computation, communication and footprint. General memory compression (Mittal & Vetter, 2016) techniques offer additional opportunities for reducing footprint and communication.

Applying some of the concepts that underlie the TRT design to other more general purpose accelerators such as Cambricon (Liu et al., 2016) or graphics processors would certainly be more preferable than a dedicated accelerator in some application scenarios. However, these techniques are best first investigated in specific designs and can then be generalized appropriately.

We have evaluated TRT for inference only. Using an engine whose performance scales with precision would provide another degree of freedom for network training as well. However, TRT would need to be modified accordingly to support all the operations necessary during training, and the training algorithms would need to be modified to take advantage of precision adjustments.

We evaluated TRT only on CNNs for image classification. Other network architectures are important, and the layer configurations and their relative importance vary. TRT enables performance improvements for two of the most dominant layer types. We have also provided some preliminary evidence that TRT works well for NeuralTalk LSTM (Karpathy & Li, 2014). Moreover, by enabling output activation computation slicing it can accommodate relatively small layers as well.

This section commented only on related work on digital hardware accelerators for DNNs. Advances at the algorithmic level would impact TRT as well or may even render it obsolete. For example, work on using binary weights (Courbariaux et al.) would obviate the need for an accelerator whose performance scales with weight precision. Investigating TRT's interaction with other network types and architectures and other machine learning algorithms is left for future work.

This work presented Tartan, an accelerator for inference with Deep Learning Networks whose performance scales inversely linearly with the number of bits used to represent values in fully-connected and convolutional layers. TRT also enables on-the-fly accuracy vs. performance and energy efficiency trade-offs, and its benefits were demonstrated over a set of popular image classification networks.
This work presented Tartan, an accelerator for inference with Deep Learning Networks whose performance scales inversely linearly with the number of bits used to represent values in fully-connected and convolutional layers. TRT also enables on-the-fly accuracy vs. performance and energy efficiency trade-offs, and its benefits were demonstrated over a set of popular image classification networks. The new key ideas in TRT are: 1) supporting both the bit-parallel and the bit-serial loading of weights into processing units to facilitate the processing of either convolutional or fully-connected layers, and 2) cascading the adder trees of various subunits (SIPs) to enable slicing the output computation, thus reducing or eliminating cross-lane imbalance for relatively small layers.

Applying some of the concepts that underlie the TRT design to other more general-purpose accelerators such as Cambricon or graphics processors would certainly be more preferable than a dedicated accelerator in many application scenarios. However, these techniques are best first investigated in specific designs and then generalized appropriately.

We have evaluated TRT for inference only. Using an engine whose performance scales with precision would provide another degree of freedom for network training as well. However, TRT needs to be modified accordingly to support all the operations necessary during training, and the training algorithms need to be modified to take advantage of precision adjustments.

TRT opens up a new direction for research in inference and training by enabling precision adjustments to translate into performance and energy savings. These precision adjustments can be done statically prior to execution or dynamically during execution. While we demonstrated TRT for inference only, we believe that TRT, especially if combined with Pragmatic, opens up a new direction for research in training as well. For systems-level research and development, TRT with its ability to trade off accuracy for performance and energy efficiency enables a new degree of adaptivity for operating systems and applications."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jorge Albericio, Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify, and Andreas Moshovos. Bit-pragmatic deep neural network computing. Arxiv, arXiv:1610.06920 [cs.LG], 2016b.

Hadi Esmaeilzadeh, Emily Blem, Renee St. Amant, Karthikeyan Sankaralingam, and Doug Burger. Dark silicon and the end of multicore scaling. In Proceedings of the 38th Annual International Symposium on Computer Architecture, ISCA '11, pp. 365-376, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0472-6. doi: 10.1145/2000064.2000108.

Yangqing Jia. Caffe model zoo. https://github.com/BVLC/caffe/wiki/Model-Zoo, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raquel Urtasun, and Andreas Moshovos. Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets. arXiv:1511.05236v4 [cs.LG], arXiv.org, 2015.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016a.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor M. Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Proteus: Exploiting numerical precision variability in deep neural networks. In Proceedings of the 2016 International Conference on Supercomputing, ICS '16, pp. 23:1-23:12, New York, NY, USA, 2016b. ACM. ISBN 978-1-4503-4361-9. doi: 10.1145/2925426.2926294.
URL http://doi.acm.org/10.1145/2925426.2926294.

Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. CoRR, abs/1412.2306, 2014. URL http://arxiv.org/abs/1412.2306.

Patrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. Computer Architecture Letters, 2016c.

Naveen Muralimanohar and Rajeev Balasubramonian. Cacti 6.0: A tool to understand large caches.

Alan F. Murray, Anthony V. W. Smith, and Zoe F. Butler. Bit-serial neural networks. In Neural Information Processing Systems, pp. 573-583, 1988.

M. Poremba, S. Mittal, D. Li, J. S. Vetter, and Y. Xie. Destiny: A tool for modeling emerging 3d nvm and edram caches. In Design, Automation Test in Europe Conference Exhibition (DATE), 2015, pp. 1543-1546, March 2015.

Brandon Reagen, Paul Whatmough, Robert Adolf, Saketh Rama, Hyunkwang Lee, Sae Kyu Lee, Jose Miguel Hernandez-Lobato, Gu-Yeon Wei, and David Brooks. Minerva: Enabling low-power, highly-accurate deep neural network accelerators. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 267-278, 2016. ISSN 1063-6897. doi: 10.1109/ISCA.2016.32.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014.

Bertil Svensson and T. Nordstrom. Execution of neural network algorithms on an array of bit-serial processors. In Pattern Recognition, 1990. Proceedings., 10th International Conference on, volume 2, pp. 501-505. IEEE, 1990.

Synopsys. Design Compiler. http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages.

Shijin Zhang, Zidong Du, Lei Zhang, Huiying Lan, Shaoli Liu, Ling Li, Qi Guo, Tianshi Chen, and Yunji Chen. Cambricon-x: An accelerator for sparse neural networks. In Proceedings of the 49th International Symposium on Microarchitecture, 2016.

Sparsh Mittal and Jeffrey S. Vetter. A survey of architectural approaches for data compression in cache and main memory systems. IEEE Trans. Parallel Distrib. Syst., 27(5):1524-1536, May 2016. ISSN 1045-9219. doi: 10.1109/TPDS.2015.2435788. URL http://dx.doi.org/10.1109/TPDS.2015.2435788."}]
HJTXaw9gx
[{"section_index": "0", "section_name": "RECURSIVE REGRESSION WITH NEURAL NETWORKS:\nAPPROXIMATING THE HJI PDE SOLUTION", "section_text": "Vicenc Rubies Royo, Claire Tomlin\nDepartment of Electrical Engineering and Computer Sciences\nUC Berkeley\nMost machine learning applications using neural networks seek to approximate\nsome function g(x) by minimizing some cost criterion. In the simplest case, if one\nhas access to pairs of the form (x, y) where y = g(x), the problem can be framed\nas a regression problem. Beyond this family of problems, we find many cases\nwhere the unavailability of data pairs makes this approach unfeasible. However,\nsimilar to what we find in the reinforcement learning literature, if we have some\nknown properties of the function we are seeking to approximate, there is still hope\nto frame the problem as a regression problem. In this context, we present an\nalgorithm that approximates the solution to a partial differential equation known\nas the Hamilton-Jacobi-Isaacs partial differential equation (HJI PDE) and compare\nit to current state of the art tools. This PDE, which is found in the fields of control\ntheory and robotics, is of particular importance in safety critical systems where\nguarantees of performance are a must."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Artificial neural networks are remarkable function approximators used in a myriad of applications\nranging from complex controllers for robotic SSC TORO Tee\nto simple image classifiers for digit recognition (LeCun et al.|/1989) . They even find uses in physics\nto find approximations to solutions of PDEs and systems of coupled ordinary differential equations\n(ODEs) [1998). Their success is in part achieved by their property of being universal\nfunction approximators (Hornik et al.||I . In order to train a neural network one usually defines\na cost function which captures the \u201dgoodness\u201d of the choice of parameters in our model, and uses\ngradient descent/ascent algorithms to improve them. In supervised learning, for example, input out-\nput data pairs are used to define a cost function such as the mean squared error or the mean absolute\nerror; unfortunately, in many cases the function we want to approximate is unkown. For instance.\nin many reinforcement learning settings one wants to find the optimal policy, a function from state\nvariables to action) | which maximizes the expected sum of discounted rewards of an agent in some\nenvironment. This function is usually unkown a priori, so this problem can\u2019t readily be framed\nas a regression problem using input-output pairs. This assertion becomes blurred, however, when\nlooking at the work of|Mnih et al.] ), where a deep Q-network learns by generating targets and\nminimizing a cost of the form"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Eg anpl(yi \u2014 Qs, a; 6;))\"].\nHere, the targets y; are generated from the same Q-network that is being used to approximate the\nQ-function, hence the neural network has two purposes: approximation and data generation. In this\nwork, we show that this same idea can be extended to the domain of approximating solutions to\npartial differential equations, and in particular the solution to the Hamiton-Jacobi-Isaacs PDE.\nIn control theory and robotics we often want to know how a system evolves in time given some\ninput signal. 
In control theory and robotics we often want to know how a system evolves in time given some input signal. In particular, one would like to know whether there exists an (optimal) input signal that drives our system to a particular region of interest in our state space and what that input is. For a deterministic system with continuous states and inputs, this problem can be succinctly expressed as a partial differential equation known as the Hamilton-Jacobi-Isaacs (HJI) PDE.

Let V : ℝⁿ × ℝ⁻ → ℝ. Then, given a time-invariant system of the form dx/dt = f(x, a, b) and boundary condition V(x, 0) = l(x), where x ∈ ℝⁿ is the state vector and a ∈ A ⊆ ℝ^{m_a} and b ∈ B ⊆ ℝ^{m_b} are inputs to the system¹, we wish to find the solution to the minimum-payoff HJI PDE, associated to the reachability problem:

∂V(x, t)/∂t + min{0, H(x, ∇_x V)} = 0,     (2)

where

H(x, ∇_x V) := max_{a∈A} min_{b∈B} ∇_x Vᵀ f(x, a, b)     (3)

is known as the Hamiltonian. The boundary condition V(x, 0) = l(x) encodes in its zero sub-level set (i.e. l(x) ≤ 0) the region of interest in our state space known as the target set T. Lastly, the solution V(x, t) to (2) encodes the information about all the starting states whose induced trajectories will enter (and possibly leave) T within |t|, given the dynamics and input signals. More precisely, for some starting state x₀ and t ≤ 0, V(x₀, t) < 0 if and only if the trajectory starting from x₀ enters T within |t|.

To give some intuition as to why V(x, t) encodes the starting states whose trajectories enter T within t, let us consider the simpler problem where dx/dt = f(x) is an autonomous system without any inputs. Further, let us write (2) as a finite difference in t. With some rearranging, and absorbing the gradient into V (i.e. ∇_x Vᵀ f(x)Δt + V(x, t) ≈ V(x + f(x)Δt, t)), one can obtain the following approximation

V(x, t − Δt) ≈ min{ V(x, t), V(x + f(x)Δt, t) }.     (4)

It is straightforward to see from (4) that at time t = 0 all the states outside of T (i.e. V(x, 0) > 0) but near its boundary, whose induced trajectories enter the target (i.e. V(x + f(x)Δt, 0) < 0) within Δt, will become negative in V(x, −Δt). Thinking of this update recursively one can intuitively see how the zero sub-level set of V grows backward in time to include more and more states.

For the case of one input trying to drive our system into T, the approximation becomes

V(x, t − Δt) ≈ min{ V(x, t), min_{b∈B} V(x + f(x, b)Δt, t) },     (5)

and for two competing inputs,

V(x, t − Δt) ≈ min{ V(x, t), max_{a∈A} min_{b∈B} V(x + f(x, a, b)Δt, t) }.     (6)

Using the previous analogy of the autonomous system, one can see how (5) and (6) are essentially different ways to expand the zero sub-level set backward in time: (5) can be seen as an input trying to expand the set as fast as possible; (6) can be seen as two inputs with competing goals, where one input tries to expand the set and the other seeks to prevent its growth. Moreover, this last setting shows the relevance of the HJI PDE in safety critical systems. By treating input b as a bounded worst-case disturbance and T as some unsafe region, one can establish safety guarantees about the system and claim which states won't be driven into T within some time horizon.

¹a is usually taken to be the input and b is taken to be some bounded input disturbance

Lastly, it is important to note that V(x, t) contains useful information in its gradient ∇_x V(x, t). In the case where dx/dt = f(x, b) has a single input, the argument minimizing the following optimization problem

b* = argmin_{b∈B} ∇_x V(x₀, t)ᵀ f(x₀, b)     (7)

yields the instantaneous optimal input for state x₀ at time t to guide the trajectory into T as fast as possible. Using this fact one can generate an optimal control policy based on the gradient of V. This idea can then be easily extended to the case of two competing inputs to obtain competing control policies. Finally, even though (7) need not be a convex problem, in this work we will only deal with simple dynamical systems, making the optimization problem easy to solve.
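For dynamics that are affine in the inputs, the Hamiltonian (3) and the optimal input (7) admit a closed form: each bounded input saturates against the sign of its coefficient in ∇_x Vᵀ f. The sketch below is our own example (not code from the paper), assuming single-dimensional inputs a ∈ [−a_max, a_max] and b ∈ [−b_max, b_max] entering through input vector fields Ga(x) and Gb(x):

    # Sketch: closed-form Hamiltonian for input-affine dynamics
    # f(x, a, b) = g(x) + Ga(x) * a + Gb(x) * b with box-bounded inputs.
    import numpy as np

    def hamiltonian(gradV, g, Ga, Gb, a_max, b_max):
        drift = gradV @ g                      # gradV' g(x)
        a_term = np.abs(gradV @ Ga) * a_max    # maximizing input a saturates
        b_term = -np.abs(gradV @ Gb) * b_max   # minimizing input b saturates
        return drift + a_term + b_term

This bang-bang structure is what makes the max/min in (3) and the argmin in (7) cheap to evaluate for the simple systems considered in this work.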
The problem presented in section 2 (as in many other cases with PDEs) is in general not straightforward to solve. For this reason, trying to find a good approximation instead of the actual solution can be a reasonable approach. Many current state-of-the-art tools used to approximate solutions of PDEs, including (2), use gridding techniques (Mitchell, 2007) whereby finite differences are used to iteratively update values on a grid. Another approach (Lagaris et al., 1998) is to train a feedforward neural network by minimizing the following loss

L_θ := Σ_{i=1}^{N} G(x_i, ψ_θ(x_i), ∇ψ_θ(x_i), ∇²ψ_θ(x_i))²,     (8)

where G(x, ψ(x), ∇ψ(x), ∇²ψ(x)) = 0 is the PDE whose solution ψ(x) we are trying to approximate and the points x_i are taken from the discretization of our domain. In (8), the function ψ_θ(x) := A(x) + F(x, N_θ(x)) is a candidate approximation which by construction satisfies the boundary condition, where N_θ(x) is a feedforward neural network. In order to ensure that the conditions at the boundary are satisfied, F(x, N_θ(x)) = 0 at the boundary and A(x) is a fixed function which satisfies them.

Although this approach is well suited for some problems, special care must be taken when computing the gradient of the loss with respect to the parameters. For instance, following the previous procedure, the loss for the HJI PDE would be written as

L_θ := Σ_{i=1}^{N} (∂V/∂t(x_i, t_i) + min{0, H(x_i, ∇_x V)})²,     (9)

but the min in the function makes this expression not differentiable everywhere. There exist ways to circumvent this problem (Djeridane and Lygeros, 2006), but they require the cumbersome definition of many intermediary functions which can become hard to find for complicated dynamical models.

In this work, we try to tackle the problem of finding an approximate solution to (2) from a different perspective. We show that a poor approximation to our solution is enough to generate "good enough" new data for regression, which can in turn be used to improve our model."}, {"section_index": "3", "section_name": "4.1 ALGORITHM", "section_text": "In this section we present a simple method for approximating the solution to (2) by utilizing a feedforward neural network in two ways: as a function approximator and a data generator. We believe that this parametric approach is better suited for finding good approximations by avoiding some of the limitations found in gridding/tabular techniques due to the curse of dimensionality. To that end, we start by defining our candidate approximation V_θ(x, t) to be of the same form as in (Lagaris et al., 1998); that is, a sum of two terms which help satisfy our boundary condition V(x, 0):

V_θ(x, t) = V(x, 0) + t·N_θ(x, t),     (10)

where N_θ(x, t) is a neural network mapping from our states and time variables to the real numbers.
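A minimal sketch of the candidate form (10) follows; the one-hidden-layer network with random weights is a placeholder of ours, not the trained model. The point of the construction is that V_θ(x, 0) = V(x, 0) holds exactly for any parameters θ, so the boundary condition never has to be learned:

    # Sketch of the candidate approximation V_theta(x, t) = V(x, 0) + t * N_theta(x, t).
    import numpy as np

    def boundary(x):                  # l(x) = ||x||_2 - 1, as in the experiments
        return np.linalg.norm(x) - 1.0

    def net(theta, x, t):             # stand-in N_theta: one sigmoid hidden layer
        W1, b1, w2 = theta
        h = 1.0 / (1.0 + np.exp(-(W1 @ np.append(x, t) + b1)))
        return w2 @ h

    def V_theta(theta, x, t):
        return boundary(x) + t * net(theta, x, t)

    rng = np.random.default_rng(0)
    theta = (rng.normal(size=(10, 3)), rng.normal(size=10), rng.normal(size=10))
    x0 = np.array([1.0, 2.0])
    assert abs(V_theta(theta, x0, 0.0) - boundary(x0)) < 1e-12  # exact at t = 0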
Next, we sample N points in the state variable x chosen uniformly at random over some set S which includes T (the target set), and similarly, sample N points in the time variable t uniformly at random over the set [−T, 0], where T > 0 is the desired time horizon. By sampling from these distributions, we seek to find a good approximation to V(x, t) over the set S × [−T, 0]. Once these points have been gathered, we make use of the update (4), (5) or (6) (depending on our problem) and use V_θ(x, t), the approximation itself, to generate the new regression points. The complete algorithm 4.1 is shown using update equation (6), but it should be clear how to modify it for the other cases.

Algorithm 1 Recursive Regression via SGD with Momentum"}, {"section_index": "4", "section_name": "4.2 COMMENTS", "section_text": "Algorithm 4.1 is a type of bootstrapping method in that lines 12 and 13 make use of V_θ(x, t) to generate points for regression to train N_θ(x, t), which in turn modifies V_θ(x, t) itself. At first glance, it is unclear whether the generated pairs ((x_i, t_i), y_i) will result in a good approximation to the solution of our PDE after regression; however, given the form (10) of our candidate function, we expect that points sampled near t = 0 will in fact be reasonable approximations of V(x, t) for small t. Given this assumption, we hypothesize that despite the presence of misleading data, our network will be able to do a good job at regressing over all points, thus improving our initial model and allowing the generation of improved data. By repeating this procedure, we expect the accuracy of the boundary condition to "propagate" backward in time (possibly with some minor error) in the form of better and better points for regression.

Another important aspect from line 13 is that we are simulating our dynamics forward in time using the Euler approximation step x_i + f(x_i, a*, b*)Δt. In practice, depending on the variability and complexity of the dynamics, one might use a Runge-Kutta method or a more involved integration procedure. For the experiments in the next sections a Runge-Kutta method with 4 stages (RK4) was used.
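The data-generation step at the heart of the algorithm can be sketched as follows. This is our own condensed illustration of the recursive update (a forward Euler step is used here for brevity, where the paper's experiments use RK4, and the optimal inputs a*, b* from Eq. (7) are passed in precomputed):

    # Sketch of the target-generation step (lines 12-13 of Algorithm 1):
    # the current approximation labels the next round of regression data.
    def make_target(V_theta, f, x, t, a_star, b_star, dt):
        x_next = x + dt * f(x, a_star, b_star)         # Euler step of the dynamics
        return min(V_theta(x, t), V_theta(x_next, t))  # discrete HJI update, Eq. (6)

    # The pair ((x, t - dt), y) with y = make_target(...) is then added to
    # the regression set on which N_theta is trained.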
"}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "In this section we present a few 2-dimensional experiments to demonstrate the validity of our claim and the effectiveness of the algorithm. To measure the performance of the algorithm, we compare the difference between our computed approximation and the true analytical solution. In case it is not straightforward to obtain the solution, a very accurate approximation taken from state-of-the-art tools is used instead. In particular, we make use of the LevelSet Toolbox from Mitchell (2007), a powerful computational tool for obtaining good approximations to Hamilton-Jacobi (HJ) PDEs. The first error metric to be used will be

E₁(V_θ) := (1/M) Σ_{i=1}^{M} |V(x_i, t_i) − V_θ(x_i, t_i)|,     (11)

where M is the number of points chosen from our domain to compute the average absolute error and V(x, t) can denote either the true solution or an accurate approximation. In the case where the analytical solution is known, the points are taken uniformly at random over S; otherwise, they are taken over some grid in S and [−T, 0]. Lastly, we also use a second error metric

E₂(V_θ) := (1/M) Σ_{i=1}^{M} |∂V_θ/∂t(x_i, t_i) + min{0, H(x_i, ∇_x V_θ)}|,     (12)

similar to the one defined in (9), which denotes the extent by which (on average) the approximation is violating the PDE equality. For all experiments M = 3000, all chosen uniformly at random over S × [−T, 0]. In section 5.4 we also show a visual representation of the approximations."}, {"section_index": "6", "section_name": "5.1 A LINEAR SYSTEM", "section_text": "In this experiment we study the performance of the algorithm on an autonomous system of the form

ẋ₁ = −x₁ − 2x₂,  ẋ₂ = 2x₁ − x₂,     (13)

with V(x, 0) = ||x||₂ − 1 and T = 1.0. For this simple system, the solution to the HJI PDE can be found analytically to be V(x, t) = eᵗ||x||₂ − 1. One can easily verify this by checking that it satisfies the boundary condition and (2). For this experiment, a feedforward neural network with a single hidden layer of 10 units and sigmoid activation functions was used. The number of points sampled was chosen to be N = 500, uniformly picked over the set S := {(x₁, x₂) | x₁, x₂ ∈ [−5, 5]} and over t ∈ [−T, 0]. The batches were picked to be of size K = 10, momentum decay γ = 0.95 and learning rate η = 0.1. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations.

Figure 1: From left to right: the first figure shows the mean absolute error E₁, the second figure shows the mean absolute PDE error E₂ and the third figure shows the loss L_θ as defined in algorithm 4.1 over all the data. The horizontal axis represents the iteration number.

The results shown in Fig. 1 were taken over 10 runs of the algorithm concurrently executed over multiple threads. The overall time to run the 500,000 iterations for all threads was 1521 seconds. The average E₁ error at halting time was in the order of 7 × 10⁻², whereas the E₂ error was in the order of 3 × 10⁻¹. The sharp jumps appearing in the loss figure in the majority of cases correspond to the error after new points are generated and used for regression."}, {"section_index": "7", "section_name": "5.2 PURSUIT-EVASION GAME: SINGLE INPUT", "section_text": "In this experiment we explore a pursuit-evasion game where a pursuer has to intercept an evader. In a first simplified approach, we assume the evader has a fixed heading and speed, whereas the pursuer has the same speed as the evader but has the liberty to change the direction of its heading. Fixing the evader at the origin with its heading aligned with the x-axis we frame the problem in relative coordinates between the evader and pursuer, that is x = [x_r, y_r]ᵀ, where x_r and y_r represent the x and y position of the pursuer relative to the evader. This system's dynamics are readily encoded in the following equation

ẋ_r = v_p cos(b) − v_e,  ẏ_r = v_p sin(b),     (14)

where v_p = v_e = 2.0 represent the speed of the pursuer and evader respectively, and b ∈ [0, 2π] represents the input available to the pursuer, which is the angle with respect to the x-axis. In this simplified pursuit-evasion game we say the pursuer has captured the evader if they are within 1 unit of distance from each other. Thus, we define our capture condition by defining V(x, 0) = ||x||₂ − 1, which will ensure that our approximation captures all the states from which the pursuer can capture the evader within T = 1.0. As in the previous example, we choose the same network architecture and the same values for the halting time, renewal interval, N, K, γ and η.

Figure 2: From left to right: the first figure shows the mean absolute error E₁, the second figure shows the mean absolute PDE error E₂ and the third figure shows the loss L_θ as defined in algorithm 4.1 over all the data. The horizontal axis denotes the iteration number.

The results shown in Fig. 2 were also taken over 10 runs of the algorithm like in section 5.1. The overall time to run the 500,000 iterations was 1952 seconds. The average E₁ error at halting time was also in the order of 7 × 10⁻², whereas the E₂ error was in the order of 1.5 × 10⁻¹. The points used to compute E₁ were taken from a 51 × 51 grid at t = −0.5 (half of the time horizon), using a previously computed approximation from the LevelSet Toolbox.
The reason why a single time instance was used to compute E₁ was purely to reduce the amount of computation of the error at run-time.

The last experimental example also consists of a pursuit-evasion game, but in this case the evader has access to a range of speeds through an input a ∈ [−2, 2]. The system dynamics thus become

ẋ_r = v_p cos(b) − a,  ẏ_r = v_p sin(b),     (15)

and, similarly, V(x, 0) = ||x||₂ − 1 and T = 1.0. As before, v_p = 2.0. The interesting behavior we expect to see from this experiment, in comparison to the single input counterpart, is that this new available action to the evader will make it more difficult for the pursuer to intercept. This should then be evident by looking at our approximation V_θ and its zero sub-level sets at different times. For this experiment we also chose the same architecture for the network as in the previous experiments and the same parameters, except for the halting time which was 300,000 iterations.

Figure 3: From left to right: the first figure shows the mean absolute error E₁, the second figure shows the mean absolute PDE error E₂ and the third figure shows the loss L_θ as defined in algorithm 4.1 over all the data.

The results shown in Fig. 3 were also taken over 10 runs of the algorithm. The overall time to run the 300,000 iterations over all threads was 1028 seconds. The average E₁ error at halting time was in the order of 6 × 10⁻², whereas the E₂ error was in the order of 1.5 × 10⁻¹. Like in the single input case, the points used to compute E₁ were taken from a 51 × 51 grid at t = −0.5 of a pre-computed approximation."}, {"section_index": "8", "section_name": "5.4 CONTOUR VISUALIZATION", "section_text": "In this section we briefly display some of the contours for a neural network picked at random from those computed in the experimental section. Each line corresponds to the set of states where V_θ(x, t) = 0 for t = 0, −0.25, −0.5, −0.75, −1.0. These contours enclose within them the states from which our system can reach the target set T within the absolute value of its associated time.

Figure 4: From left to right: contours for experiment one, experiment two and experiment three. As one can appreciate, the contours grow according to the specified dynamical model.

As expected, the linear system's contours expand radially in all directions since the origin is a stable equilibrium point² where all trajectories converge. For the pursuit-evasion game of one input, we also see that the contours grow toward the right, which is a sensible outcome given that the pursuer can't catch up with the evader if it starts somewhere where x_r < −1.0. Finally, the last set of contours associated with the pursuer-evader game of two competing inputs also makes sense, since starting states x_r < −1.0 or x_r > 1.0 should not permit the pursuer to intercept the evader, and so the contours should not expand in those directions. As a last comparison, in Fig. 5 we display the actual contours that would be obtained using the LevelSet Toolbox.

²with the same negative real part for the eigenvalues

Figure 5: Contours obtained from the LevelSet Toolbox in Matlab.

By comparing Fig. 5 and Fig. 4 one can qualitatively see that the neural network has learned an accurate approximation of V(x, t).

The first advantage of using this method over gridding techniques is a dramatic improvement in memory requirements. For instance, using a standard grid with [51, 51, 10] discretization points per axis (i.e. 51 in x_r, 51 in y_r and 10 in t) each of the three previous experiments requires the storage of 26,010 numbers, as opposed to 51 weights for our neural network. For the gridding approach this memory requirement must increase exponentially with the number of dimensions, whereas this need not be the case for our method. Furthermore, points that do not fall exactly on the grid have to be interpolated, whereas the neural network is an approximation that assigns values to all points in the domain. To this we can also add the fact that the neural network can yield the gradient at any point directly with backpropagation, whereas the gradient must once again be approximated for gridding techniques.

The main disadvantage of this method, for small dimensional systems in particular, is the time requirement. Computing values over a grid with the LevelSet Toolbox for the previous systems took less than 10 seconds. This advantage of gridding/tabular procedures, however, quickly disappears in higher dimensions (4D, 5D...) due to the curse of dimensionality. Finally, another disadvantage of using this method is the necessity to tune hyperparameters."}, {"section_index": "9", "section_name": "7 CONCLUSION AND FUTURE WORK", "section_text": "In this work we focus our attention on the idea that recursive/bootstrapped regression can be used in some problems where the function we wish to approximate has some known characteristics. In particular, we show that accurate approximations to the HJI PDE solution can be found by assigning a neural network two roles, one of them being function approximation, and the other data generation. To validate our hypothesis three different experiments with three distinct dynamical systems were performed with satisfactory results.

In this work we did not focus on the architecture of the neural network, but rather on its ability to perform well on three distinct tasks using the same algorithm. In future work we will try to find whether one can construct wider or deeper neural networks and obtain better results. We also want to investigate how well this method scales with the number of state and input dimensions. Positive results on that front could suppose an important step to further alleviate the effects of the curse of dimensionality, which are pervasive in gridding methods."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "Special thanks to Carlos Florensa for his implementation tips and to Jaime F. Fisac for helping in the process of writing this work."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies.
Journal of Machine Learning Research, 17:1-40, 2016. ISSN 1533-7928.

John Schulman, Sergey Levine, Michael Jordan, and Pieter Abbeel. Trust Region Policy Optimization. In ICML, 2015.

Ian Mitchell. A toolbox of level set methods. Technical report, 2007.

Badis Djeridane and John Lygeros. Neural approximation of PDE solutions: An application to reachability computations. Proceedings of the 45th IEEE Conference on Decision and Control, pages 3034-3039, 2006. ISSN 0191-2216. doi: 10.1109/CDC.2006.377184.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation Applied to Handwritten Zip Code Recognition, 1989. ISSN 0899-7667."}, {"section_index": "12", "section_name": "8 EXTRA EXPERIMENT", "section_text": "This experiment was designed to test the applicability of the method to problems beyond those presented in the previous sections. In particular, we show that with small changes we can also compute an accurate approximation to a pursuit-evasion problem in 3 dimensions. Similar to the previous examples, we frame the problem in relative coordinates with the x-axis aligned with the evader's heading, and give the pursuer and evader control over the rate of rotation. This can be written as follows:
ẋ_r = −v_e + v_p cos(θ_r) + a·y_r
ẏ_r = v_p sin(θ_r) − a·x_r
θ̇_r = b − a     (16)

For this problem the capture condition is encoded in the boundary condition V(x, 0) = ||[x_r, y_r]ᵀ||₂ − 1 (where we ignore θ_r since the capture condition only depends on the distance) and we consider the time horizon T = 1.0s. For this problem we give both pursuer and evader the same speed v_p = v_e = 1.0 and the same turning rates a, b ∈ [−1, 1]. Unlike the previous experiments, we used a neural network with two hidden layers with 10 and 5 units respectively and sigmoid activations. The number of points sampled was chosen to be N = 2000, uniformly picked over the set S := {(x_r, y_r, θ_r) | x_r, y_r ∈ [−5, 5], θ_r ∈ [−π, π]} and over t ∈ [−T, 0]. The batches were picked to be of size K = 25, momentum decay γ = 0.999 and learning rate η = 0.001. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations.

Figure 6: From left to right: the first figure shows the mean absolute error E₁, the second figure shows the mean absolute PDE error E₂ and the third figure shows the loss L_θ as defined in algorithm 4.1 over all the data.

As shown in Fig. 6, both error metrics decrease as the algorithm progresses, reaching an average error for E₁ in the order of 5.0 × 10⁻² and an average error for E₂ in the order of 1.0 × 10⁻¹. The points used to compute E₁ were taken from a 51 × 51 × 50 approximation grid at t = −0.5s. This set of experiments was run in a different machine³ using 8 threads and the total time for all threads to finish was 1000 seconds. Finally, Fig. 7 shows the zero level set contour at t = −0.5, which is now a 3D surface, from side and top perspectives. The first row shows the output of the LevelSet Toolbox from each perspective, and the second row shows a 3D scatter plot of points on the zero level-set obtained from one of the 8 neural networks that were trained.

³due to heavy usage of the first machine we had to switch to a different one

Figure 7: The first column shows the first side view perpendicular with respect to the x-z plane. The second column shows the second side view perpendicular with respect to the y-z plane. Finally, the third column shows the top view which is perpendicular with respect to the x-y plane.

For this experiment, only 111 numbers were needed to store the approximation, as opposed to 51 × 51 × 50 × 10 = 1,300,500 numbers (i.e. 51 in x_r, 51 in y_r, 50 in θ_r and 10 in t) for a [51 × 51 × 50 × 10] grid approximation."}]
S13wCE9xx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we consider the problem of embedding words into a low-dimensional space in order\nto measure the semantic similarity between them. As an example, how to find whether the word\n\u201ctable\u201d is semantically more similar to the word \u201cstool\u201d than to the word \u201csky\u201d? That is achieved\nby constructing a low-dimensional vector representation for each word and measuring similarity\nbetween the words as the similarity between the corresponding vectors.\nStep 1. Search for a low-rank matrix X that provides a good SGNS objective value;\nAlexander Fonarev!?*, Alexey Grinchuk'!, Gleb Gusev\u201d, Pavel Serdyukov\u2019, Ivan Oseledets!4\nl\u00a7kolkovo Institute of Science and Technology, Moscow, Russia\n\n2Yandex LLC, Moscow, Russia\n\n3SBDA Group, Dublin, Ireland\n\n\u201cInstitute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia\n\nnewo@newo.su, oleksii.hrinchuk@skolkovotech.ru, gleb57@yandex-team.\nNN tre ne ee eee eee i a: as a . es i: i a: in"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by\nits implementation in \u201cword2vec\u2019\u201d software, is usually optimized by stochastic gra-\ndient descent. It can be shown that optimizing for SGNS objective can be viewed\nAs an optimization problem of searching for a good matrix with the low-rank con-\nstraint. The most standard way to solve this type of problems is to apply Rieman-\nnian optimization framework to optimize the SGNS objective over the manifold of\nrequired low-rank matrices. In this paper, we propose an algorithm that optimizes\nSGNS objective using Riemannian optimization and demonstrates its superiority\nover popular competitors, such as the original method to train SGNS and SVD\nover SPPMI matrix.\nOne of the most popular word embedding models by Mikolov et al. (2013) is a discriminative neural\nnetwork that optimizes Skip-Gram Negative Sampling (SGNS) objective (see Equation 3). It aims at\npredicting whether two words can be found close to each other within a text. As shown in Section 2,\nthe process of word embeddings training using SGNS can be divided into two general steps with\nclear objectives:\nUnfortunately, most previous approaches mixed these two steps into a single one, what entails <\n10t completely correct formulation of the optimization problem. For example, popular approache:\n0 train embeddings (including the original \u201cword2vec\u201d implementation) do not take into accoun\nhat the objective from Step 1 depends only on the product X = WC\u2019: instead of straightforwarc\n-omputing of the derivative w.r.t. X, these methods are explicitly based on the derivatives w.r.t\nW and C, what complicates the optimization procedure. Moreover, such approaches do not tak\u00ab\nnto account that parametrization WC\u2122 of matrix X is non-unique and Step 2 is required. 
Indeed, for any invertible matrix S, we have X = W₁C₁ᵀ = W₁SS⁻¹C₁ᵀ = W₂C₂ᵀ; therefore, solutions W₁C₁ᵀ and W₂C₂ᵀ are equally good in terms of the SGNS objective but entail different cosine similarities between embeddings and, as a result, different performance in terms of linguistic metrics (see Section 4.2 for details).

A successful attempt to follow the above described steps, which outperforms the original SGNS optimization approach in terms of various linguistic tasks, was proposed by Levy & Goldberg (2014). In order to obtain a low-rank matrix X on Step 1, the method reduces the dimensionality of the Shifted Positive Pointwise Mutual Information (SPPMI) matrix via Singular Value Decomposition (SVD). On Step 2, it computes embeddings W and C via a simple formula that depends on the factors obtained by SVD. However, this method has one important limitation: SVD provides a solution to a surrogate optimization problem, which has no direct relation to the SGNS objective. In fact, SVD minimizes the Mean Squared Error (MSE) between X and the SPPMI matrix, which does not lead to minimization of the SGNS objective in general (see Section 6.1 and Section 4.2 in Levy & Goldberg (2014) for details).

These issues bring us to the main idea of our paper: while keeping the low-rank matrix search setup on Step 1, optimize the original SGNS objective directly. This leads to an optimization problem over matrix X with the low-rank constraint, which is often (Mishra et al. (2014)) solved by applying the Riemannian optimization framework (Udriste (1994)). In our paper, we use the projector-splitting algorithm (Lubich & Oseledets (2014)), which is easy to implement and has low computational complexity. Of course, Step 2 may be improved as well, but we regard this as a direction of future work.

As a result, our approach achieves the significant improvement in terms of SGNS optimization on Step 1 and, moreover, the improvement on Step 1 entails the improvement on Step 2 in terms of linguistic metrics.
That is why the proposed two-step decomposition of the problem makes sense, and, most importantly, it opens the way to applying even more advanced approaches based on it (e.g., more advanced Riemannian optimization techniques for Step 1 or a more sophisticated treatment of Step 2).

To summarize, the main contributions of our paper are:

• We reformulated the problem of SGNS word embedding learning as a two-step procedure with clear objectives;
• For Step 1, we developed an algorithm based on the Riemannian optimization framework that optimizes the SGNS objective over the low-rank matrix X directly;
• Our algorithm outperforms state-of-the-art competitors in terms of the SGNS objective and the semantic similarity linguistic metric (Levy & Goldberg (2014); Mikolov et al. (2013); Schnabel et al. (2015)).

In this paper, we consider the Skip-Gram Negative Sampling (SGNS) word embedding model (Mikolov et al. (2013)), which is a probabilistic discriminative model. Assume we have a text corpus given as a sequence of words w₁, ..., wₙ, where n may be larger than 10¹² and wᵢ ∈ V_W belongs to a vocabulary of words V_W. A context c ∈ V_C of the word wᵢ is a word from the set {w_{i−L}, ..., w_{i−1}, w_{i+1}, ..., w_{i+L}} for some fixed window size L. Let w, c ∈ ℝᵈ be the word embeddings of word w and context c, respectively. Assume they are specified by the following mappings:

W : V_W → ℝᵈ,  C : V_C → ℝᵈ.     (1)

The ultimate goal of SGNS word embedding training is to fit good mappings W and C.

In the SGNS model, the probability that the pair (w, c) is observed in the corpus is modeled as follows:

P((w, c) ∈ D | w, c) = σ(⟨w, c⟩) = 1 / (1 + exp(−⟨w, c⟩)),     (2)

where D is the multiset of all word-context pairs (w, c) observed in the corpus and ⟨x, y⟩ is the scalar product of vectors x and y. Number d is a hyperparameter that adjusts the flexibility of the model. It usually takes values from tens to hundreds.

In order to collect a training set, we take all pairs (w, c) from D as positive examples and k randomly generated pairs (w, c) as negative ones. Let #(w, c) be the number of times the pair (w, c) appears in the corpus. Then SGNS training maximizes the following logarithmic likelihood:

l = Σ_{w∈V_W} Σ_{c∈V_C} #(w, c)(log σ(⟨w, c⟩) + k·E_{c′∼P_D} log σ(−⟨w, c′⟩)) → max.     (3)

Relying on the prospect proposed by Levy & Goldberg (2014), let us show that the optimization problem given by (3) can be considered as a problem of searching for a matrix that maximizes a certain objective function and has the rank-d constraint (Step 1 in the scheme described in Section 1)."}, {"section_index": "2", "section_name": "2.2.1 SGNS Loss FUNCTION", "section_text": "As shown by Levy & Goldberg (2014), the logarithmic likelihood (3) can be represented as the sum of l_{w,c}(w, c) over all pairs (w, c), where l_{w,c}(w, c) has the following form:

l_{w,c}(w, c) = #(w, c) log σ(⟨w, c⟩) + k·(#(w)#(c)/|D|)·log σ(−⟨w, c⟩).     (4)

A crucial observation is that this loss function depends only on the scalar product ⟨w, c⟩ but not on the embeddings w and c separately."}, {"section_index": "3", "section_name": "2.2.2 MATRIX NOTATION", "section_text": "Denote |V_W| as n and |V_C| as m. Let W ∈ ℝⁿˣᵈ and C ∈ ℝᵐˣᵈ be matrices, where each row w ∈ ℝᵈ of matrix W is the word embedding of the corresponding word w and each row c ∈ ℝᵈ of matrix C is the context embedding of the corresponding context c. Then the elements of the product of these matrices

X = WCᵀ = (x_{w,c}),  w ∈ V_W, c ∈ V_C,

are the scalar products x_{w,c} of all pairs (w, c). Denoting

F(X) := Σ_{w∈V_W} Σ_{c∈V_C} f_{w,c}(x_{w,c}),  F : ℝⁿˣᵐ → ℝ,     (5)

where f_{w,c}(x_{w,c}) := l_{w,c}(w, c), we obtain:

Proposition 1. The SGNS optimization problem given by (3) can be rewritten in the following constrained form:

maximize F(X),  subject to X ∈ M_d,     (6)

where M_d = {X ∈ ℝⁿˣᵐ : rank(X) = d}.

The key idea of this paper is to solve the optimization problem given by (6) via the framework of Riemannian optimization, which we introduce in Section 3.

Important to note that this prospect does not suppose the optimization over parameters W and C directly. This entails the optimization in the space with ((n + m − d)·d) degrees of freedom (Mukherjee et al. (2015)) instead of ((n + m)·d), which simplifies the optimization process (see Section 5 for the experimental results).
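The objective F(X) of Eq. (5) can be evaluated directly from co-occurrence statistics. The sketch below is our own illustration (not the authors' code); N_wc is the n × m matrix of counts #(w, c), and n_w, n_c are the marginal counts #(w) and #(c):

    # Sketch: evaluating the SGNS objective F(X) of Eq. (5) for a score
    # matrix X, using the per-pair loss f_{w,c} from Eq. (4).
    import numpy as np

    def sgns_objective(X, N_wc, n_w, n_c, k):
        D = N_wc.sum()                              # |D|, total number of pairs
        log_sig = lambda Z: -np.logaddexp(0.0, -Z)  # numerically stable log sigmoid
        neg = k * np.outer(n_w, n_c) / D            # expected negative-sample weights
        return float((N_wc * log_sig(X) + neg * log_sig(-X)).sum())

Since this depends on X only, its gradient is likewise an n × m matrix, which is exactly what the Riemannian machinery of Section 3 consumes.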
"}, {"section_index": "4", "section_name": "2.3 COMPUTING EMBEDDINGS FROM A LOW-RANK SOLUTION", "section_text": "Once X is found, we need to recover W and C such that X = WCᵀ (Step 2 in the scheme described in Section 1). This problem does not have a unique solution, since if (W, C) satisfy this equation, then WS⁻¹ and CSᵀ satisfy it as well for any non-singular matrix S. Moreover, different solutions may achieve different values of the linguistic metrics (see Section 4.2 for details). While our paper focuses on Step 1, we use, for Step 2, a heuristic approach that was proposed by Levy et al. (2015) and shows good results in practice. We compute the SVD of X in the form X = UΣVᵀ, where U and V have orthonormal columns and Σ is the diagonal matrix of singular values, and use

W = U√Σ,  C = V√Σ     (7)

"}, {"section_index": "5", "section_name": "as matrices of embeddings.", "section_text": "A simple justification of this solution is the following: we need to map words into vectors in a way that similar words would have similar embeddings in terms of cosine similarities:

cos(w₁, w₂) = ⟨w₁, w₂⟩ / (‖w₁‖·‖w₂‖).

It is reasonable to assume that two words are similar if they share contexts. Therefore, we can estimate the similarity of two words w₁, w₂ as s(w₁, w₂) = Σ_{c∈V_C} x_{w₁,c}·x_{w₂,c}, which is the element of the matrix XXᵀ with indices (w₁, w₂). Note that XXᵀ = UΣVᵀVΣUᵀ = UΣ²Uᵀ. If we choose W = UΣ, we exactly obtain ⟨w₁, w₂⟩ = s(w₁, w₂), since WWᵀ = XXᵀ in this case. That is, the cosine similarity of the embeddings w₁, w₂ coincides with the intuitive similarity s(w₁, w₂). However, scaling by √Σ instead of Σ was shown by Levy et al. (2015) to be a better solution in experiments.
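A minimal sketch of this Step 2 heuristic (Eq. (7)), written by us in NumPy for illustration:

    # Sketch: recovering embeddings W and C from a low-rank X via the
    # SVD-based heuristic W = U * sqrt(Sigma), C = V * sqrt(Sigma).
    import numpy as np

    def embeddings_from_X(X, d):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U, s, Vt = U[:, :d], s[:d], Vt[:d]   # keep the top-d factors
        W = U * np.sqrt(s)                   # word embeddings (scales columns)
        C = Vt.T * np.sqrt(s)                # context embeddings
        return W, C                          # W @ C.T approximates X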
The main idea of Riemannian optimization (Udriste (1994)) is to consider (6) as a constrained optimization problem. Assume we have an approximated solution Xᵢ on a current step of the optimization process, where i is the step number. In order to improve Xᵢ, the next step of the standard gradient ascent outputs Xᵢ + ∇F(Xᵢ), where ∇F(Xᵢ) is the gradient of objective F at the point Xᵢ. Note that the gradient ∇F(Xᵢ) can be naturally considered as a matrix in ℝⁿˣᵐ. Point Xᵢ + ∇F(Xᵢ) leaves the manifold M_d, because its rank is generally greater than d. That is why Riemannian optimization methods map point Xᵢ + ∇F(Xᵢ) back to manifold M_d. The standard Riemannian gradient method first projects the gradient step onto the tangent space at the current point Xᵢ and then retracts it back to the manifold:

X_{i+1} = R(P_{T_{X_i}}(Xᵢ + ∇F(Xᵢ))),

where R is the retraction operator, and P_{T_{X_i}} is the projection onto the tangent space.

In our paper, we use a much simpler version of such approach that retracts point Xᵢ + ∇F(Xᵢ) directly to the manifold, as illustrated on Figure 1: X_{i+1} = R(Xᵢ + ∇F(Xᵢ)).

Figure 1: Geometric interpretation of one step of the projector-splitting optimization procedure: the gradient step and the retraction of the high-rank matrix Xᵢ + ∇F(Xᵢ) to the manifold of low-rank matrices M_d.

Intuitively, retractor R finds a rank-d matrix on the manifold M_d that is similar to the high-rank matrix Xᵢ + ∇F(Xᵢ) in terms of the Frobenius norm. How can we do it? The most straightforward way to reduce the rank of Xᵢ + ∇F(Xᵢ) is to perform the SVD, which keeps the d largest singular values of it. However, it is computationally expensive. Instead of this approach, we use the projector-splitting method (Lubich & Oseledets (2014)), which is a second-order retraction onto the manifold (for details, see the review by Absil & Oseledets (2015)). Its practical implementation is also quite intuitive: instead of computing the full SVD of Xᵢ + ∇F(Xᵢ) according to the gradient projection method, we use just one step of the block power numerical method (Bentbib & Kanber (2015)) which computes the SVD, which reduces the computational complexity.

In this way, we always keep the solution X_{i+1} = U_{i+1}S_{i+1}V_{i+1}ᵀ on the manifold M_d and in the form (8).

What is important, we only need to compute ∇F(Xᵢ), so the gradients with respect to U, S and V are never computed explicitly, thus avoiding the subtle case where S is close to singular (so-called singular (critical) points on the manifold). Indeed, the gradient with respect to U (while keeping the orthogonality constraints) can be written (Koch & Lubich (2007)) as

∂F/∂U = (∂F/∂X)·V·S⁻¹,

which means that the gradient will be large if S is close to singular. The projector-splitting scheme is free from this problem.

In case of the SGNS objective given by (5), an element of the gradient ∇F has the form:

(∇F(X))_{w,c} = #(w, c)·σ(−x_{w,c}) − k·(#(w)#(c)/|D|)·σ(x_{w,c}).

The whole optimization procedure is summarized in Algorithm 1.

Algorithm 1 Riemannian Optimization for SGNS
Require: Dimensionality d, initialization W₀ and C₀, step size λ, gradient function ∇F : ℝⁿˣᵐ → ℝⁿˣᵐ, number of iterations K
Ensure: Factor W ∈ ℝⁿˣᵈ
1: X₀ ← W₀C₀ᵀ  # get an initial point at the manifold
2: U₀, S₀, V₀ᵀ ← SVD(X₀)  # compute the first point satisfying the low-rank constraint
3: i ← 0
4: while i < K do
5:   U_{i+1}, S_{i+1} ← QR((Xᵢ + λ∇F(Xᵢ))Vᵢ)  # perform one step of the block power method
6:   V_{i+1}, S_{i+1}ᵀ ← QR((Xᵢ + λ∇F(Xᵢ))ᵀU_{i+1})  # with two QR-decompositions
7:   X_{i+1} ← U_{i+1}S_{i+1}V_{i+1}ᵀ  # update the point at the manifold
8:   i ← i + 1
9: end while
10: U, Σ, Vᵀ ← SVD(X_K)
11: W ← U√Σ  # compute word embeddings
12: return W
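One iteration of the loop in Algorithm 1 can be sketched compactly in NumPy. This is our own illustration under the assumption that the shifted matrix is materialized densely (a production implementation would exploit sparsity of the gradient):

    # Sketch of one projector-splitting step: two QR decompositions in
    # place of a full SVD of the shifted matrix X_i + lambda * grad F(X_i).
    import numpy as np

    def projector_splitting_step(U, S, Vt, grad, lam):
        Y = U @ S @ Vt + lam * grad        # X_i + lambda * grad F(X_i)
        U1, _ = np.linalg.qr(Y @ Vt.T)     # first half step of the block power method
        V1, R2 = np.linalg.qr(Y.T @ U1)    # second half step
        return U1, R2.T, V1.T              # X_{i+1} = U1 @ R2.T @ V1.T, rank d

The returned factors stay on the manifold M_d by construction, so no explicit rank truncation is ever needed.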
"}, {"section_index": "6", "section_name": "4.1 TRAINING MODELS", "section_text": "We compare our method ("RO-SGNS" in the tables) performance to two baselines: SGNS embeddings optimized via Stochastic Gradient Descent, implemented in the original "word2vec" ("SGD-SGNS" in the tables) by Mikolov et al. (2013), and embeddings obtained by SVD over the SPPMI matrix ("SVD-SPPMI" in the tables) by Levy & Goldberg (2014). We have also experimented with the blockwise alternating optimization over factors W and C, but the results are almost the same as the SGD results, which is why we do not include them into the paper. The source code of our experiments is available online¹.

The models were trained on the English Wikipedia "enwik9" corpus², which was previously used in most papers on this topic. Like in previous studies, we counted only the words which occur more than 200 times in the training corpus (Levy & Goldberg (2014); Mikolov et al. (2013)). As a result, we obtained a vocabulary of 24292 unique tokens (the set of words V_W and the set of contexts V_C are equal). The size of the context window was set to 5 for all experiments, as it was done by Levy & Goldberg (2014); Mikolov et al. (2013). We conduct two series of experiments: for dimensionality d = 100 and d = 200.

Optimization step size is chosen to be small enough to avoid huge gradient values. However, a thorough choice of λ does not result in a significant difference in performance (this parameter was tuned on the training data only, the exact values used in experiments are reported below)."}, {"section_index": "7", "section_name": "4.2 EVALUATION", "section_text": "We evaluate word embeddings via the word similarity task. We use the following popular datasets for this purpose: "wordsim-353" (Finkelstein et al. (2001); 3 datasets), "simlex-999" (Hill et al. (2016)) and "men" (Bruni et al. (2014)). The original "wordsim-353" dataset is a mixture of word pairs for both the word similarity and word relatedness tasks. This dataset was split (Agirre et al. (2009)) into two intersecting parts: "wordsim-sim" ("ws-sim" in the tables) and "wordsim-rel" ("ws-rel" in the tables) to separate the words from different tasks. In our experiments, we use both of them on a par with the full version of "wordsim-353" ("ws-full" in the tables). Each dataset contains word pairs together with assessor-assigned similarity scores for each pair. As a quality measure, we use Spearman's correlation between these human ratings and cosine similarities for each pair. We call this quality metric linguistic in our paper.

Table 1: Comparison of SGNS values obtained by the models. The larger is better.

            d = 100        d = 200
SGD-SGNS    -1.68 x 10^9   -1.67 x 10^9
SVD-SPPMI   -1.65 x 10^9   -1.65 x 10^9
RO-SGNS     -1.44 x 10^9   -1.43 x 10^9

Table 2: Comparison of the methods in terms of the semantic similarity task. Each entry represents the Spearman's correlation between predicted similarities and the manually assessed ones.

Dim. d   Algorithm   ws-sim   ws-rel   ws-full   simlex   men
d = 100  SGD-SGNS    0.719    0.570    0.662     0.288    0.645
         SVD-SPPMI   0.722    0.585    0.669     0.317    0.686
         RO-SGNS     0.729    0.597    0.677     0.322    0.683
d = 200  SGD-SGNS    0.733    0.584    0.677     0.317    0.664
         SVD-SPPMI   0.747    0.625    0.694     0.347    0.710
         RO-SGNS     0.757    0.647    0.709     0.353    0.701

We see that the SGD-SGNS and SVD-SPPMI methods provide quite similar results; however, the proposed method obtains significantly better SGNS values, which proves the feasibility of using the Riemannian optimization framework in the SGNS optimization problem. It is interesting to note that the SVD-SPPMI method, which does not optimize the SGNS objective directly, obtains better results than the SGD-SGNS method, which aims at optimizing SGNS. This fact additionally confirms the idea described in Section 2.2.2 that the independent optimization over parameters W and C may decrease the performance.

However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in terms of it. We see that our method outperforms the competitors on all datasets except for the "men" dataset where it obtains slightly worse results. Moreover, it is important that a higher dimension entails a higher performance gain of our method in comparison to the competitors.

In order to understand how exactly our model improves or degrades the performance in comparison to the baseline, we found several words whose neighbors in terms of cosine distance change significantly. Table 3 demonstrates neighbors of the words "five", "he" and "main" in terms of our model and its nearest competitor according to the similarity task, SVD-SPPMI. These words were chosen as representative examples whose neighborhoods in terms of the SVD-SPPMI and RO-SGNS models are strikingly different. A neighbor of a source word is bold if we suppose that it has a similar semantic meaning to the source word. First of all, we notice that our model produces much better neighbors of the words describing digits or numbers (see word "five" as an example). A similar situation happens for many other words, e.g. in case of the word "main" the nearest neighbors contain 4 similar words in case of our model instead of 2 in case of SVD-SPPMI. The neighborhood of the word "he" contains fewer semantically similar words in case of our model. However, it filters out completely irrelevant words, such as "promptly" and "dumbledore".

Table 3: Examples of the semantic neighbors (with cosine distances) obtained for the words "five", "he" and "main" by our method and SVD-SPPMI.

"five":  SVD-SPPMI: lb (0.748), kg (0.731), mm (0.670), mk (0.651), lbf (0.650), per (0.644)
         RO-SGNS:   four (0.999), three (0.999), six (0.997), seven (0.997), eight (0.996), and (0.985)
"he":    SVD-SPPMI: she (0.918), was (0.797), promptly (0.742), having (0.731), dumbledore (0.731), him (0.730)
         RO-SGNS:   when (0.904), had (0.903), was (0.901), who (0.892), she (0.884), by (0.880)
"main":  SVD-SPPMI: major (0.631), busiest (0.621), principal (0.607), nearest (0.607), connecting (0.591), linking (0.588)
         RO-SGNS:   major (0.689), important (0.661), line (0.631), external (0.624), principal (0.618), primary (0.612)

Talking about the optimal number K of iterations in the optimization procedure and step size λ, we found that they depend on the particular value of dimensionality d. For d = 100, we have K = 25, λ = 5·10⁻⁵, and for d = 200, we have K = 13, λ = 10⁻⁴. Moreover, it is interesting that the best results were obtained when SVD-SPPMI embeddings were used as an initialization of the Riemannian optimization process.

Skip-Gram Negative Sampling was introduced by Mikolov et al. (2013). The "negative sampling" approach was thoroughly described by Goldberg & Levy (2014), and the learning method is explained by Rong (2014). There are several open-source implementations of the SGNS neural network, which is widely known as "word2vec"³⁴.

As shown in Section 2.2, Skip-Gram Negative Sampling optimization can be reformulated as a problem of searching for a low-rank matrix. In order to be able to use out-of-the-box SVD for this task, Levy & Goldberg (2014) used the surrogate version of SGNS as the objective function. There are two general assumptions made in their algorithm that distinguish it from the SGNS optimization:

1. SVD optimizes the Mean Squared Error (MSE) objective instead of the SGNS loss function.
2. In order to avoid infinite elements in the SPMI matrix, it is transformed in an ad-hoc manner (SPPMI matrix) before applying SVD.

This makes the objective not interpretable in terms of the original task (3). As mentioned by Levy & Goldberg (2014), the SGNS objective weighs different (w, c) pairs differently, unlike the SVD, which works with the same weight for all pairs, which may entail a performance fall. The comprehensive explanation of the relation between the SGNS, SPPMI and SVD-over-SPPMI methods is provided by Keerthi et al. (2015). Lai et al. (2015); Levy et al. (2015) give a good overview of highly practical methods to improve these word embedding models.

An introduction to optimization over Riemannian manifolds can be found in the paper of Udriste (1994). The overview of retractions of high-rank matrices to low-rank manifolds is provided by Absil & Oseledets (2015).
The projector-splitting algorithm was introduced by Lubich & Oseledet\n(2014), and also was mentioned by Absil & Oseledets (2015) as \u201cLie-Trotter retraction\u2019.\nRiemannian optimization is succesfully applied to various data science problems: for example, ma-\ntrix completion (Vandereycken (2013)), large-scale recommender systems (Tan et al. (2014)), and\ntensor completion (Kressner et al. (2014))."}, {"section_index": "8", "section_name": "] CONCLUSIONS AND FUTURE WORK", "section_text": "It seems to be an interesting direction of future work to apply more advanced optimization tech.\nniques to Step 1 of the scheme proposed in Section | and to explore the Step 2 \u2014 obtaining embed.\ndings with a given low-rank matrix.\n*Original Google word2vec: https: //code.google.com/archive/p/word2vec/\n4Gensim word2vec: https: //radimrehurek.com/gensim/models/word2vec.htm:\n1. SVD optimizes Mean Squared Error (MSE) objective instead of SGNS loss function.\n\n2. In order to avoid infinite elements in SPMI matrix, it is transformed in ad-hoc manner\n(SPPMI matrix) before applying SVD.\n[n our paper, we proposed the general two-step scheme of training SGNS word embedding model\nand introduced the algorithm that performs the search of a solution in the low-rank form via Rie-\nmannian optimization framework. We also demonstrated the superiority of the proposed method, by\nproviding the experimental comparison to the existing state-of-the-art approaches."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. Multimodal distributional semantics. J. Artif\nIntell. Res.(JAIR), 49(1-47), 2014.\nLev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and\nEytan Ruppin. Placing search in context: The concept revisited. In WWW, pp. 406-414, 2001.\nDaniel Kressner, Michael Steinlechner, and Bart Vandereycken. Low-rank tensor completion by\nriemannian optimization. BIT Numerical Mathematics, 54(2):447\u2014-468, 2014.\nSiwei Lai, Kang Liu, Shi He, and Jun Zhao. How to generate a good word embedding? arXi\npreprint arXiv:1507.05523, 2015.\nXin Rong. word2vec parameter learning explained. arXiv preprint arXiv: 1411.2738, 2014.\nMingkui Tan, Ivor W Tsang, Li Wang, Bart Vandereycken, and Sinno Jialin Pan. Riemannian pursuit\nfor big matrix recovery. In JCML, volume 32, pp. 1539-1547, 2014.\nTomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representa-\ntions of words and phrases and their compositionality. In NJPS, pp. 3111-3119, 2013.\nTobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. Evaluation methods for\nunsupervised word embeddings. In EMNLP, 2015."}]
SywUHFcge
[{"section_index": "0", "section_name": "A THEORETICAL FRAMEWORK FOR ROBUSTNESS OF (DEEP\nCLASSIFIERS AGAINST ADVERSARIAL EXAMPLES", "section_text": "Beilun Wang, Ji Gao, Yanjun Qi\nDepartment of Computer Science\nUniversity of Virginia\nCharlottesville, VA 22901, USA\nBeilun Wang, Ji Gao, Yanjun Qi\nDepartment of Computer Science\nUniversity of f Virginia\n(bw4mw, jg6yd, yanjun}@virginia.edt\nMost machine learning classifiers, including deep neural networks, are vulnerable\nto adversarial examples. Such inputs are typically generated by adding smal!\nbut purposeful modifications that lead to incorrect outputs while imperceptible tc\nhuman eyes. The goal of this paper is not to introduce a single method, but tc\nmake theoretical steps towards fully understanding adversarial examples. By using\nconcepts from topology, our theoretical analysis brings forth the key reasons why\nan adversarial example can fool a classifier (f1) and adds its oracle (f2, like human\neyes) in such analysis. By investigating the topological relationship between twc\n(pseudo)metric spaces corresponding to predictor f; and oracle f2, we develop\nnecessary and sufficient conditions that can determine if f; is always robust (strong\nrobust) against adversarial examples according to f2. Interestingly our theorems\nindicate that just one unnecessary feature can make f; not strong-robust, and the\nright feature representation learning is the key to getting a classifier that is both\naccurate and strong robust."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Neural Networks (DNNs) can efficiently learn highly accurate models and have been demon:\n\nstrated to perform exceptionally well (Krizhevsky et al.|/2012] 2014). However, recent\n\nstudies show that intelligent attackers can force many machine learning models, including DNNs, to\nmisclassify examples by adding small and hardly visible modifications on a regular test sample.\nThe maliciously generated inputs are called \u201cadversarial examples\u201d\nand are commonly crafted by carefully searching small perturbations through an\noptimization procedure. Several recent studies proposed algorithms for solving such optimization to\nfool DNN classifiers. firstly observe that convolution DNNs are vulnerable\nto small artificial perturbations. They use box-constrained Limited-memory BFGS (L-BFGS) to\ncreate adversarial examples and find that adversarial perturbations generated from one DNN network\ncan also force other networks to produce wrong outputs. Then, try to\nclarify that the primary cause of such vulnerabilities may be the linear nature of DNNs. They then\npropose the fast gradient sign method for generating adversarial examples quickly. Subsequent papers\n(Fawzi et al.|{2015} [Papernot et al.|[2015a| [2015) have explored other ways to explore\nadversarial examples for DNN (details in Section |2.1). 
The goal of this paper is to analyze the\nrobustness of machine learning models in the face of adversarial examples.\nIn response to progress in generating adversarial examples, researchers attempt to design strategies for\nmaking machine-learning systems robust to various noise, in the worst case as adversarial examples.\nFor instance, denoising NN architectures (Vincent et al.|/2008} [Gu & Rigazio| [2014} Jin et al.|{2015)\ncan discover more robust features by using a noise-corrupted version of inputs as training samples.\nA modified distillation strategy is proposed to improve the robustness of\nDNNs against adversarial examples, though it has been shown to be unsuccessful recently (Carlini &\n(2016a). The most generally successful strategy to date is adversarial training\n2014} (2013) which injects adversarial examples into training to improve the\ngeneralization of DNN models. More recent techniques incorporate a smoothness penalty ("}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Table 1: A list of important notations used in the paper\nThis paper tries to answer above questions and makes the following contributions:\nSection[2] points out that previous definitions of adversarial examples for a classifier (f,) have\noverlooked the importance of an oracle function (2) of the same task.\n\nSection[3|formally defines when a classifier f; is always robust (\"strong-robust\") against adversarial\nexamples. It proves four theorems about sufficient and necessary conditions that make f; always\nrobust against adversarial examples according to f2. Our theorems lead to a number of interesting\ninsights, like that the feature representation learning controls if a DNN is strong-robust or not.\nSection|12]is dedicated to provide practical and theoretically grounded directions for understanding\nand hardening DNN models against adversarial examples.\nTable[I| provides a list of important notations we use in the paper.\nfi A learned machine learning classifier fj = c1 0 91.\n\nfo The oracle for the same task (see Definition ) fo = \u20ac2 0 ge.\n\nGi Part of f; including operations that progressively transform input into a new\nform of learned representations in X;.\n\nCj Part of f; including simple decision functions (like linear) for classifying.\n\nxX Input space (e.g., {0, 1, 2,... , 255}92*32*3 for CIFAR-10 data\n& Hinton! 2009).\n\nY utput space (e.g., {1,2,3,..., 10} for CIFAR-10 data\n2008).\n\nXy \u2018eature space defined by the feature extraction module g; of predictor f}.\n\nXo Feature space defined by the feature extraction module go of oracle f2.\n\ndiC,-) The metric function for measuring sample distances in feature space X; with\nrespect to predictor f}.\n\nda(-,-) The metric function for measuring sample distance in feature space X2 with\nrespect to oracle fo.\n\ndC.) The Pseudometric function with respect to predictor fi, dj(z,x\u2019) =\ndi(gi(x), gi (x\u2019)).\n\nd5(-,-) The Pseudometric function with respect to oracle f2, d3(z,2\u2019) =\nd2(go(x), g2(x\u2019)).\n\nae. 
almost everywhere (Folland][2013); (defined by Definition 9.2) in Section[9.Tp\n\n\u20ac, 01, 62, 0,1\n\nsmall positive constants"}, {"section_index": "3", "section_name": "et al} 2016} Zheng et al.| 2016) or a layer-wise penalty (Carlini & Wagner}|2016b) as a regularization\n\nterm in the loss function to promote the smoothness of the DNN model distributions.", "section_text": "Recent studies (reviewed by (Papernot et al.| 2016b)) are mostly empirical and provide little under-\n\nstanding of why an adversary can fool machine learning models with adversarial examples. Several\nimportant questions have not been answered vet:\ne What makes a classifier always robust to adversarial examples?\n\ne Which parts of a classifier influence its robustness against adversarial examples more, compared\nwith the rest?\n\nWhat is the relationship between a classifier\u2019s generalization accuracy and its robustness against\nadversarial examples?\n\nWhy (many) DNN classifiers are not robust against adversarial examples ? How to improve?\nThis section provides a general definition of adversarial examples , by including the notion of an\noracle. For a particular classification task, a learned classifier is represented as f; : X \u2014 Y, where\nX represents the input sample space and Y is the output space representing a categorical set.\nVarious definitions of \u201cadversarial examples\u201d exist in the recent literature, with most following\nEq. {2.1). See more detailed reviews in Section|8] The basic idea is to generate a misclassified sample\n(x, a\")\n\nXx\n\n(X,d'2)\n\n1\n\nMachine-learning\n\nclassifier\n\n(X1,d,)\n\nClassification\n\n(X2, dz)\n\nfe\n\noy\nAN Oracle\n\nC2\n\nY\n\n7 i Cy\n\nO =\u2014_\u2014\n\nOo \u2014\nFigure 1: Example of a machine-learning classifier (predictor) and a human annotator (oracle) for\nclassifying images of hand-written \u201c0\u201d. Both include two steps: feature extraction and classification\nThe upper half is about the learned machine classifier f; and the lower half is about the oracle fo. fi\ntransforms samples from the original space X to an embedded metric space (Xj, d1) using its feature\nextraction step. Here, d; is the similarity measure on the feature space X1. Classification models\nlike DNN cover the feature extraction step in its model, though many other models like decision\ntree need hard-crafted or domain-specific feature extraction. Then f; can use a linear function to\ndecide the classification prediction Y \u20ac Y. Similarly, human oracle f2 transforms data samples from\nthe original space X into an embedded metric space (X2, dz) by its feature extraction. Here, d2 is\nthe corresponding similarity measure. Then the oracle get the classification result y \u20ac Y using the\nfeature representation of samples (Xo, d2).\nx\u2019 by \u201cslightly\u201d perturbing a correctly classified sample x, with an adversarial perturbation A(z, 2\u2019).\nFormally, when given 7 \u20ac X\nHere x, 2\u2019 \u20ac X. A(x, x\u2019) represents the difference between x and x\u2019, which depends on the specific\ndata type that x and x\u2019 belong tof] Table|2|summarizes different choices of f; and A(x, x\u2019) used in\nthe recent literature, in which norm functions on the original space X are mostly used to calculate\nA(z, x\u2019). Multiple algorithms have been implemented to solve Eq. { as a constrained optimizatior\n(summarized by the last column of Table 2p. 
More details are included for three such studies ir\n\nSection[8.2]\nWhen searching for adversarial examples, one important property has not been fully captured by\n\n). That is, an adversarial example has been modified very slightly from its seed and these\nmodifications can be so subtle that, for example in image classification, a human observer does not\neven notice the modification at all. We define the role of \u201chuman observer\u201d more formally as follows\nDefinition 2.1. An \u201cOracle\u201d represents a decision process generating ground truth labels for a tas!\nof interest. Each oracle is task-specific, with finite knowledge and noise-fre\n'For example, in the case of strings, A(a, x\u2019) represents the difference between two strings\n2We leave all detailed analysis of when an oracle contains noise as future work.\ns.t. fi(x) A fila\u2019)\n(e)\n(eo)\nMachine classifier f, \u00a9 oe\n\nClass 1\n\nClass 2\n\nAdversarial\nsample\n\nthe oracle fz\nMachine classifier f;\nFigure 2: An example showing that f, with one unnecessary feature (according to f2) is prone to\nadversarial examples. The red circle denotes an adversarial example (e.g. generated by some attack\nsimilar as JSMA (details in Section[8.2)). Each adversarial example is very\nclose to its seed sample in the oracle feature space (according to d2), but it is comparatively far from\nits seed sample in the feature space (according to d;) of the trained classifier and is at the different\nside of the decision boundary of f;. Essentially \u201cadversarial examples\u201d can be easily found for all\nseed samples in this Figure. We only draw cases for two seeds. Besides, for each seed sample, we\ncan generate a series of \u201cadversarial examples\u201d (by varying attacking power) after the attacking line\ncrosses the decision boundary of f;. We only show one case of such an adversarial example for each\nseed sample.\nTable 2: Summary of the previous studies defining adversarial examples.\nRandom forest and SVM\nPrevious studies fi A(z, z\u2019)] Formulation of fi (x) F fi(2\u2019)\n(Goodfellow et al.|/2014) Convolutional neural networks loo argmax Loss(fi(2\u2019), fi(z))\n\n(Szegedy et al.|[2013) Convolutional neural networks by azamin Loss(fi(a\u2019), 1), subject to: TF fi(a\u2019)\n(Biggio et al. 2013) Support vector machine (SVM) | @3 aren Loss(fi(2\u2019), \u20141), subject to: fi(z) = 1\n(Kantchelian et al.| 2015) Decision tree and Random forest | @2, \u20ac1, arin Loss(fi(z\u2019), \u20141), subject to: fi(x) = 1\n(Papernot etal. 2016a) Convolutional neural networks i argmax Loss(fi(2\u2019), fi(x))\n\n{Grosse etal. 2016) Convolutional neural networks Lo anemax Loss(fi(2\u2019), fi(2))\n\n(Xu et al. [2016] Random forest and SVM \u00a31, boo argmin Loss(fi(z\u2019), \u20141), subject to: fi(x) = 1\nThe goal of machine learning is to train a learning-based predictor function f; : X \u2014 Y to\napproximate an oracle classifier f2 : X \u2014 Y for the same classification task. For example, in image\n\nclassification tasks, the oracle f2 is often a group of human annotators. Adding the notation of oracle,\nwe revise Eq. (2.1) into:\ns.t. fi(x) # fila\u2019)\n\nAo(x,2') <\u20ac\n\nJ2(2) = fale\u2019)\nIllustrated in Figure|1| we denote the feature space an oracle uses to consider difference among\nsamples for the purpose of classification decision as X2. The sample difference uses a distance\nfunction dz in this space. 
An oracle function f2 : X \u2014 Y can be decomposed as f2 = c2 \u00a9 go where\ng2 : X \u2014\u00bb Xp\u00bb represents the operations for feature extraction from X to X2 and cp : X2 > Y\ndenotes the simple operation of classification in X2. Essentially go includes the operations that\n(progressively) transform input representations into an informative form of representations X9. cs\napplies relatively simple functions (like linear) in X2 for the purpose of classification. d2 is the metric\nfunction (details in Section) an oracle uses to measure the similarity among samples (by relying or\nrepresentations learned in the space X). We illustrate the modeling and decomposition in Figure[1\nIn Section]3|our theoretical analysis uses (Xp, dz) to bring forth the fundamental causes of adversarial\nexamples and leads to a set of novel insights to understand such examples. To the best of the authors\u2019\nknowledge, the theoretical analysis made by this paper has not been uncovered by the literature.\nModeling Oracle f2: One may argue that it is hard to model f2 and (X2, d2) for real applications,\nsince if such oracles can be easily modeled machine-learning based f; seems not necessary. In\nSection [8.3] we provide examples of modeling oracles for real applications. For many security-\nsensitive applications about machines, oracles f2 do exist] For artificial intelligence tasks like image\nclassification, humans are f\u00bb. As illustrated by cognitive neuroscience papers 2\n[DiCarlo et al.|{2012), human brains perform visual object recognition using the ventral visual stream,\nand this stream is considered to be a progressive series of visual re-representations, from V1 to V2\nto V4 to IT cortex (DiCarlo & Cox||2007). Experimental results support that human visual system\nmakes classification decision at the final IT cortex layer. This process is captured exactly by our\ndecomposition f2 = c2 0 go.\nNow we use the decomposition of f2 to rewrite A(x, x\u2019) as do(go(x), go(x\u2019)) in Eq.\nobtain our proposed general definition of adversarial examples:\nDefinition 2.2. adversarial example: Suppose we have two functions f, and f. f; : X \u2014 Y is the\nclassification function learned from a training set and fz : X \u2014 Y is the classification function of the\noracle that generates ground- truth labels for the same task. Given a sample x \u20ac X, an adversarial\nexample x' \u20ac X. ( satisfies Eq.\nst. fix) # fix\u2019)\nd2(g2(x), ga(x\")) < 52\nf(x) = fola\u2019)\nOracles fz do exist in many security-sensitive applications about machines. But machine-learning classifiers\nf; are used popularly due to speed or efficiency\nA\u00bb(x,x\u2019) < \u20ac reflects that adversarial examples add \u201csmall modifications\u201d that are almost imper.\nceptible to oracle of the task. Clearly calculating A(x, x\u2019) needs to accord to oracle f2. For most\nclassification tasks, an oracle does not measure the sample difference in the original input space\nX. We want to emphasize that sample difference is with regards to its classification purpose. For\ninstance, when labeling images for the hand-written digital recognition, human annotators do not\nneed to consider those background pixels to decide if an image is \u201c0\u201d or not.\nMost previous studies (Table[2) have made an important and implicit assumption about f2 (through\nusing A(x,2\u2019) < \u20ac): fz is almost everywhere (a.e.) continuous. We explains the a.e. 
continuity\nassumption and its indication in Section] Basically, when f2 is assumed continuous a.e.,\n\nP( falr) \u2014 fale\u2019\\\\dalaal(r) qalr\u2019)) <\u2014 6.) \u20141\nst. fi(a) A fi(a\u2019)\ndx(g2(x), g2(x\")) < da"}, {"section_index": "4", "section_name": "3.1 MODELING AND DECOMPOSING f;", "section_text": "As shown in Figure[I] we decompose f; in a similar way as the decomposition of f2. This is to\nanswer another key question: \u201cwhich parts of a learned classifier influence its robustness against\nadversarial examples more, compared with the rest?\u201d. A machine-learning classifier f = c1 \u00a9 gi.\nwhere g; : X \u2014 X; represents the feature extraction operations and c; : X; \u2014 Y performs a simple\noperation (e.g., linear) of classification. Section|8-4]provides multiple examples of decomposing\n\nstate-of-the-art f1|*] d; denotes the distance function f uses to measure difference among samples\nin X,.\nAlmost all popular machine learning classifiers satisfy the a.e. continuity assumption. It means:\nP(filx) A file\n\nv)| fo(x) =\n3.2 {d55,7}-STRONG-ROBUST AGAINST ADVERSARIAL EXAMPLES\nVa,a' \u20ac X\nP(fi(x) = fil2\u2019)|fo(x) = fale\u2019),\ndo(g2(x), 92(x\u2019)) < 62) > 1-1\nEq. defines the \u201c{52, 7}-strong- -robustness\u201d as a claim with the high probability. To simplify\nnotations, in the rest of this paper, we use \u201cstrong-robust\u201d representing \u201c{62,7}-strong-robust\u201d. Also\nin the rest of this paper we propose and prove theorems and corollaries by using its more general\nform by Eq. (3.2). For all cases, if f2 is continuous a.e., all proofs and equations can be simplified\n\nby using only the term d2(g2(x), g2(2\u2019)) < 62 (i.e. removing the term f(x) = f2(2\u2019)) according to\n\nEq. (B.3)).\nThe \u201cstrong-robustness\u201d definition leads to four important theorems in next two subsectio:\n\u201cNotice that gi may also include implicit feature selection steps like \u00a2; regularization.\n\nBoundary points are those points satisfying fi(x) 4 fi(x\u2019) and di(gi(x), g1(2\")) < 51)\n\n\u00b0When f; is continuous a.e., P(fi(x) 4 fi(x\u2019)|di(gi(x), g1(a\u2019)) < 61) = 0.\n\n7B oundarv points .d adversarial examples\u201d only attack seed samples who are boundary points c\nWith a more accurate definition of \u201cadversarial examples\u201d, now we aim to answer the first central\nquestion: \u201cWhat makes a classifier always robust against adversarial examples?\u201d. Section\ndefines the concept \u201cstrong-robust\u201d describing a classifier always robust against adversarial examples.\n\nSection [3.3] and Section [3.4] present sufficient and necessary conditions for \u201cstrong-robustness\u201d\u2019.\nSection|4|then provides a set of theoretical insights to understand \u201c\u2018strong-robustness\u201d.\nFor the rare cases that f; is not continuous a.e., Section|1 1]discusses \"boundary points\" of f;\n\nRoughly speaking, when f; is not continuous a.e. ad\nP(t. (\u00bb\\) 4 \u00a2 (a\\I 7 /.. =e) a. (e\\)\\) - \u00a3.)VN0\nWe then apply reverse-thinking on Definition G2) and derive the following definition of strong-\nrobustness for a machine learning classifier against adversarial examples:\nDefinition 3.1. {62,}-Strong-robustness of a machine-learning classifier: A machine-learning\nclassifier f1(-) is {62,n}-strong-robust against adversarial examples if: Vx,x' \u20ac X a.e., (x,x')\nsatisfies Eq.\n3.3. 
TOPOLOGICAL EQUIVALENCE OF TWO METRIC SPACES (Xj, d1) AND (Xo, dz) IS\nSUFFICIENT IN DETERMINING STRONG-ROBUSTNESS\nIf the topological equivalence ( Eq. (10.1)) exists between (X1,d) and (X2, dg), it means that for\nall pair of samples from X, we have the following relationship:\ndi(gi(), 91(a\")) < 51 = do(go(x), go(x\u2019)) < 6:\ndy(gi(x), g1(x\")) < 61 \u00a9 do(go(x), go(a\u2019)) < do\nTheorem 3.2. When f is continuous a.e., if (X1,d1) and (X2,d2) are topologically equivalent,\nthen the learned classifier {\\(-) is strong-robust to adversarial examples.\nProof. See its proofs in Section[10.3.4\nP(fi(x) = fi(2\")|fo(z) = fal\u2019),\n3.4 FINER TOPOLOGY OF (X, d{,) THAN (X, d5) IS SUFFICIENT AND NECESSARY IN\nDETERMINING STRONG-ROBUSTNESS\nVa,u \u20ac X,\nd2(g2(x), g2(a\u2019)) < 52 => di(gi(x), gi(a\u2019)) < 1\nUsing Eq. (3.7) and the continuity a.e. assumption, we can derive the following Theorem about the\nsufficient and necessary condition for f; being strong-robust:\nTheorem 3.4. When f, is continuous a.e., f1 is strong-robust against adversarial examples if and\nonly if the topology in (X,d\u2018,) is a finer topology than the topology in (X, di).\nIn the appendix, Section|I0.I]briefly introduces the concept of metric space and the definition of\ntopological equivalence among two metric spaces. As shown in Figure[I| here f, defines a metric\nspace (X;,d;) on X, with the metric function d,. Similarly fz defines a metric space (X2, dz) on\nX\u00bb with the metric function do.\nWhen /f; is continuous a.e., this can get us the following important theorem, indicating that the\ntopological equivalence between (X,,d,) and (X\u00bb,dz) is a sufficient condition in determining\nwhether or not f; is strong-robust against adversarial examples:\nFor more general cases including f; might not be continuous a.e., we need to consider the probability\nof the boundary point attacks (Ea. (3.1)). Therefore. we get a more general theorem as follows:\nNow we extend the discussion from two metric spaces into two pseudometric spaces. This extension\nfinds the sufficient and necessary condition that determines the strong-robustness of f;. The related\ntwo pseudometrics are di, (for f,) and d (for fz), both directly being defined on X. Appendix Sec-\ntion[T0.2}includes detailed descriptions of pseudometric, pseudometric spaces, topology and a finer\ntopology relationship between two pseudometric spaces.\nTable 3: Summary of theoretical conclusions that we can derive. Here X; = R\u2122 and X2 = R\u201d.\nThe strong-robustness is determined by feature extraction function g;. The accuracy is determined by\nboth the classification function c; and the feature extraction function g1.\nTheorem 3.5. When fi is not continuous a.e., if the Ween in (x, ih )) is a ae topol\n\nogy than the topology in (X,d3) and P(fi(2) # filt\")|fole) = fale\u2019), dr(gi(a),ai(e\u2019)) <\n61, do(go(x), go(x\u2019)) < 62) <n, then f, is strong-robust against ) ore} \u2018examples.\nWhen f; is not continuous a.e., its strong-robustn\nand therefore relates to the c; function. Sectio:\nsuch cases in the rest of this paper.\n\nis significantly influenced by its boundary points\nprovides some discussion and we omit covering\nCorollary 4.1. 
When f; is continuous a.e., if X; = R\u2122, Xz = R\u2122, ny > no, Xo C Xi, did\nare norm functions, then f,(-) is not strong-robust against adversarial examples.\nThis corollary shows if unnecessary features (with regards to X2) are selected in the feature selection\nstep, then no matter how accurate the model is trained, it is not strong-robust to adversarial examples\nFigure[2|shows a situation that the oracle for the current task only needs to use one feature to classify\nsamples correctly. A machine learning classifier extracts two features with one used by the oracle\nand the other is an extra unnecessary feature[?] In Xj, fi (actually c,) successfully classifies all the\ntest inputs. However, it\u2019s very easy to find adversary examples satisfying Eq. (2\nsmall perturbation along the unnecessary feature dimension. In Figure red circles show a few such\nadversarial examples. The adversarial examples are very close to seed samples in the oracle space\nBut they are predicted into a different class by f1.\nFor many security sensitive applications, previous studies using state-of-art learning-based classifiers\nnormally believe that adding more features is always helpful. Apparently, our corollary indicates that\nCases: d,& dy are norms Can be accurate? Based on piustration\n() X1 \\(Xif) X2) ZG, | Not Strong-robust may not be accurate Theorem (3.4 Figure[2|\n\nX2 EX\ndD | mi > m2,X2 OX Not strong-robust may be accurate Corollary (4.1) | Figure[2] |\n(ID | my = n2,X1 = X2 Strong-robust may be accurate Corollary (4.2) | Figure/4] |\n(IV) | ny < ng, X1 C Xe Strong-robust may not be accurate Theorem (3.4) | Figure |\nWhen ff; is not continuous a.e.. we need to consider the probability of the boundary points based\nadversarial examples (Eq. (3.1)). For such a case, we get a sufficient condition | for the strong-\nrobustness:\nThe four theorems proposed above lead to a set of key insights about why and how an adversarial can\nfool a machine-learning classifier using adversarial examples. One of the most valuable insights is:\nfeature learning step decides whether a predictor is strong-robust or not in an adversarial test setting.\nAll the discussions in the subsection assume f/f; is continuous a.e..\nTheorem (3.2) and Theorem indicate that when f; is continuous a.e., the two feature spaces\n(X1,d;) and (X2, dz) or the functions g; and g2 determine the strong-robustness of f;. Based on\nTheorem , we can derive a corollary as follows (proof in Section[10.3.1)"}, {"section_index": "5", "section_name": "this thinking is wrong and can lead to their classifiers vulnerable to adversarial examples(Xu et al.\n\n1016).", "section_text": "Using Theorem (3.3), we obtain another corollary as follows (proof in Section|I\nThis corollary shows that if a learned classifier and its oracle share the same derived feature space\n(X, = Xo), the learned classifier is strong-robust when two metrics are both norm functions (even if\nnot the same norm). We can call this corollary as \u201cnorm doesn\u2019t matter\u2019.\nMany interesting phenomena can be answered by Corollary (42). For instance, for a norm regularized\nclassifier, this corollary answers an important question that whether a different norm function will\ninfluence its robustness against adversarial examples. 
The corollary indicates that changing to a\ndifferent norm function may not improve the robustness of the model under adversarial perturbation.\nSummarizing Theorem ( 4.2) and Corollary (4\nlearned classifier is decided by two factors: (1) the difference between two derived feature spaces:\nand (2) the difference between the metric functions. Two corollaries show that the difference between\nthe feature spaces is more important than the difference between the two metric functions."}, {"section_index": "6", "section_name": "4.3 ROBUSTNESS AND GENERALIZATION", "section_text": "In Table [3] we provide four situations in which the proposed theorems can be used to determin\nwhether a classifier f; is strong-robust against adversarial examples or not.\nTable B]provides a much better understanding of the relationship between robustness and accuracy.\nTwo interesting cases from Table B]are worth to emphasize again: (1) If f; misses features used by\nf2 and does not include unnecessary features (according to X2), f1 is strong-robust (even though it\nmay not be accurate). (2) If f; extracts some extra unnecessary features, it will not be strong-robust\n(though it may be a very accurate predictor).\nWe want to emphasize that \u201cf; is strong-robust\u201d does not mean it is a good classifier. For example, \u00a2\ntrivial example for strong-robust models is f\\(2) = 1, Vx \u20ac X. However, it is a useless model since\nit doesn\u2019t have any prediction power. In an adversarial setting, we should aim to get a classifier that i:\nboth strong-robust and precise. A better feature learning function g; is exactly the solution that maj\nachieve both goals.\nTable|3]indicates that c, and_cz do not influence the strong-robustness of f; when f, is continuous\na.e. [I Figure|4/and Figure|5|further show two concrete example cases in which f;, is strong-robust\naccording to fz. However, in both figures, f; is not accurate according to fo.\nAs another example, multiple DNN studies about adversarial examples claim that adversarial examples\nare transferable among different DNN models. This can be explained by Figure[2](when X,isa\nmuch higher-dimensional space). Since different DNN models learn over-complete feature spaces\n{X,}, there is a high chance that these different X, involve a similar set of unnecessary features\ne.g., the different learned features are correlated with others). Therefore the adversarial examples are\ngenerated along similar gradient directions. That is why many such samples can evade multiple DNN\nmodels.\nCase (1): If f; uses some unnecessary features, it will not be strong-robust to adversarial examples.\nIt may not be an accurate predictor if f; misses some necessary features used by f2.\n\nCase (II): If f; uses some unnecessary features, it will not be strong-robust to adversarial examples.\nIt may be an accurate predictor if f; uses all the features used by fo.\n\nCase (IID): If f; and f2 use the same set of features and nothing else, f) is strong-robust and may\nbe accurate.\n\nCase (IV): If {1 misses some necessary features and does not extract unnecessary features, f1 is\nstrong-robust (even tough its accuracy may not be good).\nrandom perturbation\nFor DNN, it is difficult to derive a precise analytic form of d; (or d{). But we can observe some\nproperties of d; through experimental results. Table[5] [|Table [B]Table[T}and Table[8]show properties of\nd, (and d\u2018) resulting from performing testing experiments on four state-of-art DNN networks (details\nin Section[I2.1). 
All four tables indicate that the accuracy of DNN models in the adversarial setting\nare quite bad. The performance on randomly perturbed inputs is much better than performance on\nmaliciously perturbed adversarial examples.\nDifferently, for human oracles, a sphere in (X, d5) (shown in Figure[3|(I1)) or in (X2, dz) (shown\nin Figure|3|(IV)) corresponds to an ellipsoid in (X, || - ||) not including very-thin directions (shown\nin Figure]3|(VD). When the attackers try to minimize the perturbation size using the approximated\ndistance function dz = || - ||, the thin direction of ellipsoid in Figure[3](V) is exactly the adversarial\ndirection."}, {"section_index": "7", "section_name": "5.2 TOWARDS PRINCIPLED SOLUTIONS", "section_text": "Our theorems suggest a list of possible solutions that may improve the robustness of DNN classifier\nagainst adversarial samples. Options include such as:\nBy learning a better g,: Methods like DNNs directly learn the feature extraction function g,. Table/4\nsummarizes multiple hardening solutions (Zheng et al.| 2016} Miyato et al. 2016} Lee et al.| 2015\nin the DNN literature. They mostly aim to learn a better g; by minimizing different loss functions\nLy, (x, x\u2019) so that when d2(g2(x), g2(ax\u2019)) < \u20ac (approximated by (X, || - ||)), this loss Ly, (x, x\u2019) is\nsmall. Two major variations exist among related methods: the choice of Ly, (x, x\u2019) and the way\nto generate pairs of (, x\u2019). For instance, to reach the strong-robustness we can force to learn a gj\nthat helps (X, d/) to be a finer topology than (X32, d/,). Section| explores this option (\u201cSiamese\ntraining\u201d in Table/4p through Siamese architecture. Experimentally ection[I2.5]compares adversarial\ntraining, stability training and Siamese training on two state-of-the-art DNN image-classification\nTable 4: Connecting to relevant DNN hardening solutions. The experimental results of comparing\ndifferent hardening solutions are shown in Figure]9] Figure10| Table/IO]and Table|T1]\na Loss Ly, (x, 2\u2019) On Layer\n\nStability training (Zheng] ] random perturbation KL(fi(2), fila\u2019) Classification layer\n2016}\n(Miyato et al-|/2016) adversarial perturbation | KL(fi(x), fi(2\u2019)) Classification layer\nAdversaria train- | adversarial perturbation | L(fi(2\u2019) fata)) Loss function\ning(Goodfellow_et_al.\n2014}\n\narge Adversarial train- | adversarial perturbation | L(f1(2\u2019), fo(z)) Loss function\ning(Kurakin et al|(2016)\n(Lee et al {[2015) \u00ab| _ adversarial perturbation | || gi(x) \u2014 gi(#\u2019) lz | Layer before classification\n\nlayer\n\nSiamese Training random perturbation ] 91 (2) \u2014 gi\u201d) 2. \u2018| Layer before classification\n\nlayer\nLayer before classification\nlayer\nlayer\n\nLayer before classification\nlayer\nOur theoretical analysis uncovers fundamental properties to explain the adversarial examples. In this\nsection, we apply them to analyze DNN classifiers. More specifically, (1) we find that DNNs are not\nstrong-robust against adversarial examples; and (ii) we connect to possible hardening solutions and\nintroduce principled understanding of these solutions.\nThe phenomenon we observed can be explained by Figure[3] Comparing the second column and\nthe third column in four tables we can conclude that d; (and d{) in a random direction is larger\nthan dj (and d{) in the adversarial direction. 
This indicates that a round sphere in (X,,d,) (and\n(X, d{,)) corresponds to a very thin high-dimensional ellipsoid in (X, || - ||) (illustrated by the left half\nof Figure). Figure[3](I) shows a sphere in (X, d\u2018,) and Fare shows a sphere in (Xj, d1).\nThey correspond to the very thin high-dimensional ellipsoid in (X, || - ||) in Figure[3](V). The norm\nfunction || - || is defined in space X and is application-dependent. All four tables uses || - |] = || - ||oo.\nM (x,d,')\n\nChee 0\n\nN\nN 7\nd,'(a,b) small~ \u2014 ~\n\n\u00bb eo @ , Deep Neural Nets \u00b0 Human oracle\n\n7\n\nN\nd,'(a,b) Larges ~~ -\n\nill = 68\nee a\n/ \u2018\n/ @\u00ae @e \\\nFarl 1\nq eo \u00b0@ 4 Deep Neural Nets\n\n7\n\nN\nd,(a,b) Large ~~ \u2014- -\n\n@ Class 1\n\nClass 2\n\n\u00a2\nOE cass 3 /\n\nx f&\nB 8 ,d;')\n| |\n\n/ nN Not a Finer\n\npa .-.! | ao\ny, . Not Topological Equivaleny, *\n/ @ @ \\\nFarl 1\nq @ @ 4 Deep Neural Nets\n\nN 7\n(a,b) Large ~~ \u2014-\n\nJ a 2 (X, 1D\n|_|\n\nAdversarial direction | \u2014 ~\no aa ,\n\\ - 7 |la\u2014b |llthe same 6\n>\u00bb? 7 7 7\n7 @ @ jo\n\n:\n7@\u00aee@ -\n\u00a2 :\n\nx f&\n\n-\n(X41, dy) and (X2, dz) are not topologically equivalent). According to Theorem (3.4), in this case, the\nDNN is vulnerable to adversarial examples. The two sample points a and b are close with regards to\n(w.rt.) a norm || - || in_X. They are also close w.r.t. dz in (X2, d2) space and close w.r.t. d5 in (X, d5)\nspace. But they are far from each other in the space of (X, d/,) and in the space of (X1, d1). In other\nwords, while d2(a, b), d(a, b) and ||a \u2014 6|| are small, d;(a, 6) and d},(a, 6) are large. Clearly, DNN\ncan be easily evaded by adding a small perturbation || a \u2014 b || on sample a or sample b. NOTE: it is\nnormally difficult to get the analytic form of (X2, dz) for most applications. Most previous studies\n(reviewed in Section|2.2) assume (X2, d2) equals to (X, || - ||), where || - || is a norm function.\n\nFigure 3: This figure shows one situation that (X, d\u2018,) is not a finer topology than (X, d5) (therefore\ntasks through performance against adversarial samples (details in Section\neffects of these strategies vary from task to task, however, they all improve t\nperformance in the adversarial setting.\n\n. The hardening\nbase DNN models\nBy modifying unnecessary features: As shown by Table[3} unnecessary features ruin the strong\nrobustness of learning-based classifiers. A simple way to remove the unrelated features is to identify\nwhich feature is unnecessary. In ) the authors compare the difference between\ngi(x\") and g(x) from DNN. They hypothesize that those learned DNN feature dimensions (in Xj)\nchanging rapidly are utilized by an adversary, and thus can be removed to improve the robustness\nof DNN model. Another efficient method is to substitute different values of features into several\nequivalent classes. By this way, the adversarial perturbation in the unnecessary feature dimensions\ncan be squeezed by projecting into the same equivalent class. A recent study qi & Vorobeychik\nexplored a similar strategy by using equivalent-feature-group to replace each word feature in a\ngroup, in order to improve the robustness of spam-email classifiers against evasion attacks.\nAdversarial examples are maliciously created inputs that lead a learning-based classifier to produce\nincorrect output labels. An adversarial example is often generated by adding small perturbation:\nthat appear unmodified to human observers. 
Recent studies that tried to analyze classifiers unde\nadversarial examples are mostly empirical and provide little understanding of why. To fill the gap, we\npropose a theoretical framework for analyzing machine learning classifiers, especially deep neura\nnetworks (DNN) against such examples. This paper is divided into three parts. The first sectior\nprovides a revised definition of adversarial examples by taking into account of the oracle of the task\nThe second section defines strong-robustness and provides the principled understanding of wha\nmakes a classifier strong-robust. The third section examines practical and theoretically groundec\ndirections for understanding and hardening DNN models against adversarial examples. Future step:\nwill include an empirical comparison to analvze recent literature using our theorems.\n4 ule DTOAGET secure MachimMe iCarMine Nia, TesealCners also Make attempts Or Naraenins learnins\n\nsystems. For instance: (1) and (Biggio et al.||2008) propose a method tc\n\nintroduce some randomness in the selection of classification boundaries; (2) A few recent studies\n(Xiao et al.|/2015} {Zhang et al_| al.|/2015) consider the impact of using reduced feature sets on classifiers\nunder adversarial attacks. (Xiao et al.||2015) proposes an adversary-aware feature selection model that\ncan improve a classifier\u2019s robustness against adversarial attacks by incorporating specific assumption:\nabout the adversary\u2019s data manipulation strategy. (3) Another line of works, named as adversaria\ntraining (Goodfellow et al.||2014), designs a new loss function for training neural networks, which is \u00ab\nlinear interpolation of the loss function of the original sample and the loss function of the adversaria\nexample generated by the original sample. A scalable version of adversarial training (Kurakin et al.\n\nwas recently proposed. By applying several tricks, the author can apply the adversarial training\nto deeper network trained by the imagenet dataset. (4) Multiple studies model adversarial scenarios\nwith formal frameworks representing the interaction between the classifier and the adversary. Related\nefforts include perfect information assumptions (Dalvi et al.|2004), assuming a polynomial number o:\nmembership queries (Lowd & Meek\\|2005), formalizing the attack process as a two-person sequentia\nStackelberg game (Briickner & Scheffer}|2011}|/Liu & Chawla 2010), a min-max strategy (training\n\na classifier with best performance under the worst perturbation) (Dekel et al.||2010}|Globerson &\n\nthe solution of computing the best defender strategy against the learned adversary \u2018behavior. Tt has \u00a2\nInvestigating the behavior of machine |\n\nearning systems in adversarial environments is an emerging\n\ntopic (Huang et al.||2011}|Barreno et al. {2006} |2010}|Globerson & Roweis| 2006} [Biggio et al.|\n\ninto three types: (1) Poisoning attacks\n\n2012} Mei & Zhu} |2015a) have consi\n\nautoregressive models and topic mode!\ngoal is to create inputs that are misclass\netal.\nBiggio eval.\n\nan opportunity to influence the trainin\ntrained classifier like DNN, SVM or ran\n2014) is another important category re\nstudies have proposed various strategies\nLi & Zhou\n\ntraining data. Multiple recent papers\n\n2016b\n\n2015} [Rajkumar & Agarwall [2012|\n\n2013} Kantchelian et al.]/2015} [Zhang et al.|/2015). 
Recent studies can be roughly categorized\n\nin which specially crafted attack points are injected into the\n\n\u00a32016) Met & Zhu 201Sb}[Bigao etal 2014)\n\nered the problem of an adversary being able to pollute the\n\ntraining data with the goal of influencing learning systems including support vector machines (SVM),\n\nIs. (2) Evasion attacks are attacks in which the adversary\u2019s\nified by a deployed target classifier. Related studies (Szegedy|\n2014\n(2016) assume the adversary does not have\nig data, but instead finds \u201cadversarial examples\u201d to evade a\ndom forest. (3) Privacy-aware machine learning\nevant to data security in machine learning systems. Recent\nPor 2015) to preserve the\n\nprivacy of data such as differential privacy. This paper focuses on evasion attacks that are mostly\n\nused to attacking classifiers that try to\n\nistinguish malicious behaviors from benign behaviors. Here\n\nwe extend it to a broader meaning \u2014 adversarial manipulation of test samples. Evasion attacks may be\n\nencountered during system deployment\n\nof machine learning methods in adversarial settings.\nsimilar conclusion as ours ( Section[3) that the extreme cases that the defender doesn\u2019t work only ha\n\nzero probability (Sinha et al.}|2016).\n/\u2014\n\nMachine\nclassifier\n\nfi\n\nthe oracle\n\nfh\n\nTest-Sample Case (a)\n\nAccurate Prediction\n\nAG) = AG)! dx\") <e\u20ac\n\nerin\n\nclassifier\n\nfh\n\nthe oracle\n\nfh\n\nTest-Sample Case (b)\n\nNot accurate\n\nAi@) = AG\u2019) dex) <\u20ac\n\nerin\n\nclassifier\n\nfh\n\nthe oracle\n\n\\\"\n\nPRG) # AD] d2@,x') < \u20ac) =0\n\nx)\n:\n:\nLA,\na\na xX,\n\u00ae\n\u00ae\n\nTest-Sample Case (c)\n\nAiG) # iD] dow, x') <e\u20ac\nFigure 4: An example figure illustrating Table 3|Case (II) when f; is strong-robust. We assume\nc, and cy as linear classification functions. We show one case of X,; = X\u00bb = R? and f,, fo are\ncontinuous a.e.. In terms of classification, f; (green boundary line) is not accurate according to f,\n(red boundary line). All pairs of test samples (x, x\u2019) can be categorized into the three cases shown in\nthis figure. Test-case (a): f; and f2 assign the same classification label (yellow circle) on x and x\u2019. a\nand x\u2019 are predicted as the same class by both. Test-case (b): f1 assigns the class of \u201cblue square\u201d on\nboth x and x\u2019. f2 assigns the class of \u201cyellow circle\u201d on both x and x\u2019. Test-case (c): f2 assigns the\nclass of \u201cyellow circle\u201d on both x and x\u2019. However, 1 assigns the class of \u201cblue square\u201d on x and\nassigns a different class of \u201cyellow circle\u201d on x\u2019. This case has been explained in Section11]\n\u2018Machine classifier f;\n\nTest-Sample Case (a)\n\nthe oracle fz\n\n> X,\n\nAccurate Prediction\n\nAiG) = Ai@)| doe x') <\u20ac\n\n\u2018Machine classifier f;\n\nTest-Sample Case (b)\n\nthe oracle fz\n\n> X,\n\nNot accurate\nAiG) = A) dou x') <\u20ac\n\n\u2018Machine classifier f;\n\nTest-Sample Case (c)\n\nthe oracle fz\n\n> X,\nFigure 5: An example figure illustrating Table[3]Case (IV) when f; is strong-robust. We assume c;\nand cz as linear classification functions. We show one case of 1 = nj < ng = 2, X, C Xe and fy,\nfz are continuous a.e.. In terms of classification, f; (green boundary line) is not accurate according to\nfz (red boundary line). All pairs of test samples (x, x\u2019) can be categorized into the three cases shown\nin this figure. 
Test-case (a): f; and f assign the same classification label (yellow circle) on x and 2\u2019\nx and x\u2019 are predicted as the same class by both. Test-case (b): f1 assigns the class of \u201cyellow circle\u2019\non both x and x\u2019. f2 assigns the class of \u201cblue square\u201d on both x and x\u2019. Test-case (c): f2 assigns the\nclass of \u201cyellow circle\u201d on both x and x\u2019. However, f; assigns the class of \u201cblue square\u201d on x and\nassigns a different class of \u201cyellow circle\u201d on x\u2019. This case can be explained in Section|11]\nFor the purpose of fooling\u201d a classifier, naturally, the attacker wants to control the size of the\nperturbation A(x, x\u2019) to ensure the perturbed sample 2\u2019 still stays close enough to the original sample\nzx to satisfy the intended \"fooling\" purpose. For example, in the image classification case, Eq. (2.1\ncan use the gradient information to find a A(x, x\u2019) that makes human annotators still recognize x7 a\nalmost the same as \u00ab, though the classifier will predict x\u2019 into a creat class. In another (apis foun\nwith more obvious security implications about PDF malware (Xu et al.| (Xuetal| (2016), 2\u2019 in Eq. (2.1) (2.1) is founc\nby genetic programming. A modified PDF file from a malicious PDF seed po still be recognized a:\nmalicious by an oracle machine (i.e., a virtual machine decides if a PDF file is malicious or not by\n\nactually running it), but are classified as benign by state-of-art machine learning classifiers (Xu et al\n\n2016).\nSubject to: f(a) 4 f(x\u2019)\nEq. tries to find the x\u2019 by minimizing A(x, x\u2019) under some constraints. Eq. (2\ngeneral formulation than Eq. and can summarize most relevant studies. For example, in\n\u201cadversarial examples\u201d are those generated PDFs that can fool PDFRate (a learning-based\nclassifier for detecting malicious PDFs) to classify them as benign. The distances of these variant\n\nPDFs to the seed PDF are not necessarily minimal. For such cases, Eq. still fits, while Eq.\ndoes not.\nBesides, in the field of computer security, machine learning has been popular in classifying the\nmalicious (y = 1) behavior versus benign behavior (y = \u20141). For such a context, two differen\ndefinitions of adversarial examples exist in the literature:\nFor instance, (Biggio et al.|/2013) uses a formula as follows:\nTo fool classifiers at test time, several approaches have been implemented to generate \u201cadversarial\nperturbations\u201d by solving Eq. (2.2). According to Eq. , an adversarial example should be able\nto change the classification result f1(x), which is a discrete value. To solve Eq. (2.2), we need tc\ntransform the constraint f(x) # f(x\u2019) into an optimizable formulation. Then we can easily use the\nLagrangian multiplier to solve Eq. . All the previous studies define a loss function Loss(-, -) te\nquantify the constraint f(x) # fi(27). This loss function can be the same with the training loss, o1\nit can be chosen differently, such as hinge loss or cross entropy loss.\nWe summarize four common attacking studies as follows:\nGradient ascent method (Biggio et al. Machine learning has been popular in classifying\nmalicious (y = 1) versus benign (y = \u20141) in computer security tasks. For such contexts, a simple\n\nway to solve Eq. is through gradient ascent. 
To minimize the size of the perturbation and\nmaximize the adversarial effect, the perturbation should follow the gradient direction (i.e., the\ndirection providing the largest increase of function value, here from y = \u20141 to 1). Therefore, the\nperturbation r in each iteration is calculated as:\n\n\u2014 +r ep apy \u201d a\nwt. A(a, a\") < dinax\nfil) > 0\ns.t. f(a\u2019 ese\nfil) >\nHere dmax is a small positive constant. These definitions of \u201cadversarial examples\u201d are special cases\n\nof Eq. and Eq. (2.1p.\nBox L-BFGS adversary (Szegedy et al.}/2013) This study views the adversarial problem as a\n\nconstrained optimization problem, 1.e., find a minimum perturbation in the restricted sample space.\nThe perturbation is obtained by using Box-constrained L-BFGS to solve the following equation:\nFast gradient sign method (Goodfellow et al., The fast gradient sign method proposed\nby (Goodfellow et al. (2014) [2014) views dz as the \u00a3 nt In this case, a natural choice is to make the\n\nattack strength at every feature dimension the same. The perturbation is obtained by solving the\nfollowing equation:\neee, a, Aes\nHere the loss function is the function used to train the neural network. A recent paper (Kurakin et al.\nshows that adversarial examples generated by fast gradient sign method are misclassified even\nafter these images have been recaptured by cameras.\nJacobian-based saliency map approach (Papernot et al.| 2015a) (Papernot et al.| 2015a) pro\n\nposed the Jacobian-based saliency map approach (JSMA) to search for adversarial samples whil\nlimiting the number of pixel to modify in the image. As a targeted attack, JSMA iteratively per\nturbs pixels in an input that have large adversarial saliency scores. The adversarial saliency may\nis calculated from the Jacobian (gradient) matrix Vx f1(x) of the DNN model at the current inpu\nx. The (i,j) component in Jacobian matrix Vx fi(x) describes the derivative of output class ;\nwith respect to feature pixel i. For each pixel i, its adversarial saliency score is calculated to reflec\nhow this pixel will increase the output score of class 7 versus changing the score of other possibl.\noutput classes. The process is repeated until misclassification in the target class is achieved or the\nmaximum number of perturbed pixels has been reached. Essentially, JSMA optimizes Equation[2._\nby measuring perturbation A(x, x\u2019) through the @)-norm.\nThough difficult, we want to argue that it is possible to theoretically model \"oracles\" for some\nstate-of-the-art applications. For instance, as illustrated by the seminal cognitive neuroscience\n\npaper \"untangling invariant object recognition\" and its follow-up study\n(DiCarlo et al.||/2012), the authors show that one can view the information processing of visual object\nrecognition by hu\n\nman brains as the process of finding operations that progressively transform retinal\nrepresentations into a new form of representation (X2 in this paper), followed by the application of\nrelatively simple decision functions (e.g., linear classifiers (Duda et al.||2012)). More specifically,\nin human and other primates, such visual recognition takes place along the ventral visual stream,\nand this stream is considered to be a progressive series of visual re-representations, from V1 to\nV2 to V4 to IT x (DiCarlo & Cox] (2007). 
Multiple relevant studies (e.g.,\nhave argued that this viewpoint of representation learning\nplus simple decision function is more productive than hypothesizing that brains directly learn very\ncomplex decision functions (highly non-linear) that operate on the retinal image representation. This\nis because the experimental evidence suggests that this view takes the problem apart in a way that is\nconsistent with the architecture and response properties of the ventral visual stream. Besides, simple\ndecision functions can be easily implemented in a single step of biologically plausible neuronal\nprocessing (i.e., a thresholded sum over weighted synapses).\nAs another example, the authors of used genetic programming to find \u201cadversarial\nexamples\u201d (by solving Eq. (2.2)) for a learning-based malicious-PDF classifier. This search needs an\noracle to determine if a variant x\u2019 preserves the malicious behavior of a seed PDF xz (ie., f(x) =\nf2(2\")). The authors of (Xu et al.||2016) therefore used the Cuckoo sandbox (a malware analysis\nsystem through actual execution) to run a variant PDF sample in a virtual machine installed with a\nPDF reader and reported the behavior of the sample including network APIs calls. By comparing the\nHere p is the total number of features, c is a term added for the Lagrange multiplier. (for an image\nclassification task, it is 3 times the total number of pixels of an RGB image) / is a target label, which\nis different from the original label. The constraint \u00ab + r \u20ac [0, 1]? means that the adversarial example\nis still in the range of sample space.\nlin(c x do(a,x +r) \u2014 Loss(fi(x +r), fi(x))),2 +r \u20ac [0, 1}\nbehavioral signature of the original PDF malware and the manipulated variant, this oracle successfull\ndetermines if the malicious behavior is preserved from x to x\u2019. One may argue that \"since Cuckox\nsandbox works well for PDF-malware identification, why a machine-learning based detection systen\nis even necessary?\". This is because Cuckoo sandbox is computationally expensive and runs slov\nFor many security-sensitive applications about machines, oracles f2 do exist, but machine-learnins\nclassifiers f; are used popularly due to speed or efficiency.\n(t is difficult to decompose an arbitrary f; into g; oc. However, since in our context, f is a machin\u00ab\nlearning classifier, we can enumerate many possible g; functions to cover classic machine learnins\nclassifiers.\nMost previous studies (Table[2) have made an important and implicit assumption about f; and f2: f\nis almost everywhere (a.e.) continuous. i \u20ac {1,2}.\nDefinition 9.1. Suppose f; is the classification function. f; is continuous a.e., i \u20ac {1,2}, if\nVa \u20ac X ae., 46; > 0, such that Vx! \u20ac X.,d;(9;(x), 9;(x')) < 6;, f(x) = fi (a\u2019).\nIllustrated in Figure[I] d; is the metric function (details in Section[3) f; uses to measure the similarity\namong samples in the space X;. For notation simplicity, we use the term \u201ccontinuous a.e.\u201d for\n\u201ccontinuous almost everywhere [hina the rest of the paper. The above definition is a special case of\n\nalmost everywhere continuity defined in (Folland}|2013) (see Definition .2) in Section[9.1), since\n\nwe decompose f; in a certain way (see Figure[1). The a.e. continuity has a few indioatons Tike:\nWw. Jf ew m/f (.\\ NID (2. (2) 2. (INV 2b ec \\y Nn (1 1\\\nfi is continuous a.e.: Almost all popular machine learning classifiers satisfy the a.e. continuity\nassumption. 
For instance, a deep neural network is certainly sme\u2019) a.e.. Similarly to the results\nshown by (Szegedy et al.||2013), DNNs satisfy that |f\\(x) \u2014 fi(2\")| < W || \u00ab\u2014 2\u2019 |\\2 where\nW < [[W; and W; > [[(wi, loo. Here i = fe 2,... 2) representing i-th linear layer in NN\nTherefore, Ve > 0, let 6 = c/W. Then | f(a) \u2014 f1(2\u2019)| < \u20ac when d;(x, 2\u2019) =|| 2 \u2014 v les 6. This\nshows that a deep neural network is almost everywhere continuous when d,(-) = || -\nFor the rare cases when /| is not continuous a.e., see next Section\nthat matter for analyzing adversarial perturbations.\n\ndiscussing \"boundary points\u2019\nThe a.e. continuity has a few indications,\ne X is nota finite space; and Vx, x\u2019 \u20ac X, P(fi(x) = fi(2\")|di(gi(x), gi(a\u2019)) < 5:) =1\ne It does not mean the function f; is continuous in every point in its feature space X;\nVarious feature selection methods are potential gy.\n\nFor DNN, g; includes all the layers from input layer to the layer before the classification layer.\nIn SVM, X,, d, is decided by the chosen reproducing Hilbert kernel space.\n\nRegularization is another popular implicit feature extraction method. For example, \u00a2; regularizatior\ncan automatically do the feature extraction by pushing some parameters to be 0.\nf2 is assumed continuous a.e. previously: Most previous studies find \"adversarial examples\" by\nsolving Eq. , instead of Eq. . This made an implicit assumption that if the adversarial\nexample x\u2019 is similar to the seed sample x, they belong to the same class according to f2. This\nassumption essentially is: f> is almost everywhere (a.e.) continuous.\nIn Section we show that if f; is not continuous a.e., it is not robust to any types of noise.\nConsidering the generalization assumption of machine learning, machine learning classifiers should\nsatisfy the continuity a.e. assumption. Section|9.2|provides two examples of how popular machine\nlearning classifiers satisfy this assumption.\ne Ifa probability distribution admits a density, then the probability of every one-point set {a} is zero:\nthe same holds for finite and countable sets and the same conclusion holds for zero measure sets\nfor instance, straight lines or circle in R\u201d.\n\ne The a.e. continuity follows the same property as density function: the probability of picking\none-point set {x} from the whole feature space is zero; the same holds for zero measure sets. This\nmeans: the probability of picking the discontinuous points (e.g., points on the decision boundary)\nis zero, because they are null sets.\n\ne Most machine learning methods focus on X = R? or space equivalent to R? (e.g., [0, 1]?) (see\nAppendix: Section . Most machine learning methods assume f; is continuous a.e. (see\nAppendix: Section\nDefinition 9.2. Suppose (X,F,P) is a probability space(for general definition, (X,%, 1) is \u00a2\nmeasure space), where X is the sample space, a o-algebra F is a collection of all the events and P i.\na probability measure defined in X and F. A property holds \u201calmost everywhere\u201d (a.e.) in X if anc\nonly if the probability measure of the set for which the property holds equals 1.\nLemma 9.3. [f the a.e. continuity assumption doesn\u2019t hold, there exists a non-zero measure set D.\nVa \u20ac D, sa\u2019\nst. fila) A f(x\u2019)\n\ndy (x, 2\") < 6\nProof. Without it, for any test sample x, you can easily find a very similar sample 2\u2019 (i.e. 
Lemma (9.3) shows that f1 is not robust even to a random noise if we do not assume f1 is continuous a.e..

Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. For example:

- Logistic regression for text categorization with a bag-of-words representation. A classifier on a multivariate feature representation in which each feature represents (modified) counts of a word is naturally a.e. continuous, since {x' | d1(x, x') < δ1, x' ≠ x} = ∅ when δ1 is small and x, x' are mostly sparse vectors. Logistic regression with a bag-of-words representation is therefore a continuous a.e. predictor.
- Support vector machine with a continuous feature representation. Suppose we define (X1, d1) by d1^2(x, x') = k(x, x) + k(x', x') - 2k(x, x'). Then the support vector machine is a linear classifier on (X1, d1). Thus, the SVM prediction function is continuous a.e. with respect to d1.

Most machine learning methods focus on the R^p space or a space equivalent to R^p (e.g., [0, 1]^p). For example, the sample space of an image classification task is intuitively 255^p, where p is the number of features (e.g., 3 x 224 x 224). However, the raw image samples are usually rescaled into X = [0, 1]^p. Therefore, the sample space X for f1 in this case is [0, 1]^p.

10 APPENDIX: USING METRIC SPACES AND PSEUDOMETRIC SPACES TO UNDERSTAND CLASSIFIERS' ROBUSTNESS AGAINST ADVERSARIAL EXAMPLES

This subsection briefly introduces the concepts of metric space and topological equivalence. A metric on a set/space X is a function d : X × X → [0, ∞) satisfying four properties: (1) non-negativity, (2) identity of indiscernibles, (3) symmetry and (4) triangle inequality. In machine learning, for example, the most widely used metric is the Euclidean distance. Kernel-based methods, such as SVM, kernel regression and Gaussian processes, consider samples in a reproducing kernel Hilbert space (RKHS). The metric in an RKHS is naturally defined as d^2(x, y) = K(x, x) + K(y, y) - 2K(x, y), in which K(·,·) is a kernel function.
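To make the induced kernel distance concrete, here is a minimal NumPy sketch that computes d(x, y) = sqrt(K(x, x) + K(y, y) - 2K(x, y)); the RBF kernel and the gamma value are illustrative choices, not ones prescribed by the text.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2), an assumed example kernel."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def rkhs_distance(x, y, kernel=rbf_kernel):
    """Distance induced by a kernel: d(x, y)^2 = K(x,x) + K(y,y) - 2K(x,y)."""
    d2 = kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y)
    return np.sqrt(max(d2, 0.0))  # clip tiny negative values caused by round-off

x = np.array([0.1, 0.4, 0.9])
y = np.array([0.1, 0.5, 0.8])
print(rkhs_distance(x, y))  # small input perturbations stay close in the RKHS metric
```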
Now we present an important definition, namely that of "topological equivalence", which can represent a special relationship between two metric spaces.

A function or mapping h(·) from one topological space to another is continuous if the inverse image of any open set is open. If this continuous function is one-to-one and onto, and the inverse of the function is also continuous, then the function is called a homeomorphism, and the domain of the function, in our case (X1, d1), is said to be homeomorphic to the output range, e.g., here (X2, d2). In other words, the metric space (X1, d1) is topologically equivalent to the metric space (X2, d2). We can state this definition as the following equation:

∃ a homeomorphism h : X1 → X2 with h(x1) = x2, where x1 ∈ X1, x2 ∈ X2

10.2 PSEUDOMETRIC SPACES AND FINER TOPOLOGY AMONG PSEUDOMETRIC SPACES

We have briefly reviewed the concept of metric space in Section 10.1 and proposed the related Theorem (3.2) in Section 3.3. This is partly because the concept of metric space has been widely used in many machine learning models, such as metric learning (Xing et al., 2003). Theorem (3.2) and the related analysis indicate that the feature spaces X1 and X2 (see Figure 1) are the key determining factors for deciding a learning model's strong-robustness.

However, it is difficult to get the analytic form of X2 in most applications (e.g., when the oracle f2 is a human annotator). In fact, most previous studies (reviewed in Section 2.2) assume (X2, d2) equals (X, ||·||), where ||·|| is a norm function. Therefore, we want to extend our analysis and results from the implicit feature space X2 to the original feature space X.

When we extend the analysis to the original space X, it is important to point out that the distance function measuring sample similarity for a learned predictor f1 in the original space X may not be a metric. The distance function in the original feature space X for the oracle f2 may not be a metric either. This is because the distance between two different samples in the original space X may equal 0, since two different samples may be projected onto the same point in X1 or X2. For example, a change in one pixel of the background of an image does not affect the prediction of f1 or f2, since g1 and g2 have already eliminated that (irrelevant) feature. This property contradicts the identity-of-indiscernibles assumption for a metric function. Therefore we need a more general concept of distance function for performing theoretical analysis in the original space X. By using the concept of pseudometric space, we derive another important theorem about strong-robustness.

Pseudometric: If a distance function d' : X × X → [0, ∞) has the following three properties: (1) non-negativity, (2) symmetry and (3) triangle inequality, we call d' a pseudometric or generalized metric. The space (X, d') is a pseudometric space or generalized metric space. It is worth pointing out that a generalized metric space is a special case of a topological space, and a metric space is a special case of a pseudometric space.

Why pseudometric space: As shown in Figure 1, we can decompose a common machine learning classifier as f1 = c1 ∘ g1, where g1 : X → X1 represents the feature extraction and c1 : X1 → Y performs the operation of classification. Assume there exists a pseudometric d1'(·,·) on X and a metric d1(·,·) defined on X1 so that ∀x, x' ∈ X,

d1'(x, x') = d1(g1(x), g1(x'))     (10.2)

Since d1 is a metric on X1, d1' fulfills the (1) non-negativity, (2) symmetry and (3) triangle-inequality properties. However, d1' may not satisfy the identity-of-indiscernibles property (i.e., making it not a metric). For example, suppose g1 only selects the first three features from X, and two samples x and x' have the same values in the first three features but different values in the rest. Clearly, x ≠ x', but d1'(x, x') = d1(g1(x), g1(x')) = 0. This shows that d1'(·,·) is a pseudometric but not a metric on X. Similarly, a pseudometric d2' for the oracle can be defined as follows:

d2'(x, x') = d2(g2(x), g2(x'))     (10.3)

To analyze the strong-robustness problem in the original feature space X, we assume it to be a generalized metric (pseudometric) space (X, d1') for f1 and a generalized metric (pseudometric) space (X, d2') for f2. Now we can analyze f1 and f2 on the same feature space X but with respect to two different pseudometrics. This makes it possible to define a sufficient and necessary condition for determining the strong-robustness of f1 against adversarial perturbation. A minimal sketch of such an induced pseudometric follows.
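The sketch below instantiates the pseudometric d1'(x, x') = d1(g1(x), g1(x')); the feature extractor g1 (keeping only the first three features) is the hypothetical example from the paragraph above, and d1 is taken to be Euclidean.

```python
import numpy as np

def g1(x):
    """Hypothetical feature extractor: keeps only the first three features."""
    return x[:3]

def d1_prime(x, x2):
    """Pseudometric on the input space induced by g1 and the Euclidean metric d1."""
    return np.linalg.norm(g1(x) - g1(x2))

x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([1.0, 2.0, 3.0, 9.0, 0.0])  # differs only in features g1 discards
print(d1_prime(x, x2))        # 0.0 although x != x2: identity of indiscernibles fails
print(np.array_equal(x, x2))  # False
```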
Before introducing this condition, we briefly introduce the definitions of topology and finer/coarser topology:

Definition 10.2. A topology τ is a collection of open sets in a space X. A topology τ is generated by a collection of open balls {B(x, δ1)}, where x ∈ X and B(x, δ1) = {z | d(x, z) < δ1}. The collection contains {B(x, δ1)}, the infinite/finite unions of such balls, and the finite intersections of them.

Definition 10.3. Suppose τ1 and τ2 are two topologies in a space X. If τ2 ⊆ τ1, the topology τ2 is called a coarser (weaker or smaller) topology than the topology τ1, and τ1 is called a finer (stronger or larger) topology than τ2.

In this section, we provide the proofs for Theorem (3.2), Corollary (4.2), Theorem (3.4), and Corollary (4.1). We first prove Theorem (3.4) and Corollary (4.1). Since "topological equivalence" is a stronger condition than "finer topology", Theorem (3.2) and Corollary (4.2) then follow straightforwardly.

- First, we want to prove that given δ2 > 0, ∃δ1 > 0 such that if d2'(x, x') < δ2, then d1'(x, x') < δ1. Consider a pair of samples (x, x') with d2'(x, x') < δ2; then x, x' ∈ B2(x, δ2), and of course B2(x, δ2) ∈ τ2. Suppose (X, d1') is a finer topology than (X, d2'). Then B2(x, δ2) ∈ τ1, and one can find B1(x0, δ1/2) ∈ τ1 such that the closure of B2(x, δ2) is contained in B1(x0, δ1/2). Therefore d1'(x, x') < δ1. Based on the a.e. continuity assumption of f1, since d1'(x, x') < δ1, f1(x) = f1(x') a.e.. This means that P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) = 1, which is our definition of strong-robustness.
- Next, we want to show that if f1 is strong-robust, then τ1 is a finer topology than τ2. Suppose f1 is strong-robust; we need to prove that ∀δ2 > 0, ∃δ1 > 0 such that if d2'(x, x') < δ2, then d1'(x, x') < δ1. Assume τ1 is not a finer topology than τ2. This means there exists a B2(x, δ2) such that B2(x, δ2) ∉ τ1. Therefore, ∀δ1 > 0 there exists x' ∈ B2(x, δ2) such that d2'(x, x') < δ2 and d1'(x, x') > δ1. Based on the a.e. continuity assumption of f1, d1'(x, x') > δ1 indicates that f1(x) ≠ f1(x'). This contradicts the strong-robustness assumption. Thus, τ1 is a finer topology than τ2.

The same argument yields the following bound in the pseudometric formulation:

P(f1(x) = f1(x') | f2(x) = f2(x'), d2'(x, x') < δ2)
= 1 - P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2'(x, x') < δ2)
= 1 - P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1'(x, x') < δ1, d2'(x, x') < δ2)
≥ 1 - η
Similarly, in the feature-space formulation:

P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 - P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 - P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1, d2(g2(x), g2(x')) < δ2)
≥ 1 - η

Proof. Suppose n1 > n2 and X2 ⊂ X1. Then (X, d2') is a strictly finer topology than (X, d1'); hence (X, d1') is not a finer topology than (X, d2'), which indicates that f1 is not strong-robust against adversarial examples. □

All pairs of test samples (x, x') can be categorized into the three cases shown in both figures. Clearly, from the two figures, c1 does not determine the strong-robustness of f1.

In real-world applications, such attacks can be, for example, adding words with a very tiny font size to a spam e-mail, invisible to a human annotator. When a learning-based classifier tries to utilize such extra words (unnecessary for a human), it can lead to many easily generated adversarial e-mails.

As another example, one previous study (Xu et al., 2016) shows that a genetic-programming based adversarial example strategy can always evade two state-of-the-art learning-based PDF-malware classifiers (with "100%" evasion rates). The reason behind such good evasion rates is Condition (4.1): both state-of-the-art PDF-malware classifiers use many superficial features (e.g., a feature representing "is there a long comment section") that are not relevant to "the malicious property" of a PDF sample at all!

10.4.3 WHEN f1 CONTINUOUS A.E., EITHER STRONG-ROBUST OR NOT ROBUST AT ALL A.E.

When f1 is not continuous a.e., the analysis of adversarial examples needs to consider "boundary points" of f1 with certain properties. This section tries to clarify the definition and the related scope; such boundary pairs can be written as {(x, x') | f1(x) ≠ f1(x'), d1'(x, x') < δ1, x ∈ X, x' ∈ X}.

Figure 5 uses an example to illustrate Table 3 Case (IV), when f1 is strong-robust. We show one case with 1 = n1 < n2 = 2, X1 ⊂ X2, and f1, f2 continuous a.e.. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line).

- Test-case (a) is when x and x' are predicted as the same class by both, and f1 gets correct predictions according to f2. There exist no adversarial examples.
- Test-case (b) is when x and x' are predicted as the same class by both, but f1 gets incorrect predictions according to f2. There exist no adversarial examples.
- Test-case (c) shows the case where f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x'). This case is explained below. Essentially, this is about "boundary-based adversarial examples", which can only attack points whose distance to the boundary of f1 is smaller than δ2 (f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x')). When f1 is continuous a.e., the probability of this set is 0.

Table 3 indicates that training a strong-robust and accurate classifier is extremely difficult in practice. For instance, Figure 2 shows that only one extra irrelevant feature, which does not hurt accuracy, makes the classifier not robust to adversarial perturbation at all (i.e., for samples a.e. in X it is easy to find their adversarial examples).

When f1 is continuous a.e., P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) equals either 1 or 0. This means f1 is either strong-robust or not robust at all a.e..
One case with this probability equal to 0 is illustrated by Figure 2; Case (III) and Case (IV) from Table 3 have this probability equal to 1.

Our definition of boundary points describes such points as pairs of samples that lie across the classification boundary. This form of definition makes the following analysis (notation-wise) easy and concise.

Figure 6: An example showing boundary points of f1 and boundary points of f2. We assume f1 and f2 are continuous a.e., and c1 and c2 are linear classification functions. The first two columns show boundary points of f1 that are not considered in this paper. The third column describes "boundary-based adversarial attacks" that can only attack seed samples whose distance to the boundary of f1 is smaller than ε. Essentially this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2.

This lemma shows that a case with the probability of boundary points larger than 0 is exactly the situation in which f1 is not continuous a.e..

The third column of Figure 6 describes "boundary-based adversarial examples" that can only attack seed samples whose distance to the boundary of f1 is smaller than δ2. Essentially this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2; that is, pairs with f1(x) ≠ f1(x'), d2(g2(x), g2(x')) < δ2 and f2(x) = f2(x').

In addition, we want to point out that all boundary pairs of f2 (satisfying f2(x) ≠ f2(x') and d2(g2(x), g2(x')) < δ2) are not considered in our analysis of adversarial examples. Figure 6 illustrates three types of boundary points; the first two columns show boundary points of f1 that are not considered in this paper. The value of the probability P(f1(x) = f1(x') | d1(g1(x), g1(x')) < δ1) is critical for our analysis in Theorem (3.2) and Theorem (3.4). Again, we want to emphasize that most machine learning methods assume f1 is continuous a.e., and therefore "boundary-based adversarial attacks" are not crucial.

Figure 7: When f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1. We assume c1 and c2 are linear classification functions. This figure shows that when (1) the sample space X is finite, (2) f1 learns a wrong decision boundary, and (3) the probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to emphasize that this situation is very rare for a well-trained classifier f1. (In the illustrated example, |X| = 10, g1 = g2, and c1 ≠ c2.)

Based on Eq. (11.4), when f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1:

P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= #{(x, x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2, f1(x) ≠ f1(x')} / #{(x, x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2}     (11.4)

Figure 7 shows an exemplar case in which X has only ten samples (i.e., |X| = 10). We assume the learned f1 and the oracle f2 derive the same feature space, i.e., X1 = X2, and that f1 performs the classification very badly because the decision boundary (by c1) on X1 is largely different from the decision boundary (by c2) on X2. The probability of "adversarial examples" in this case can be calculated by using Eq. (11.4); we get P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1) = 6/10 = 0.6.
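A small sketch of how Eq. (11.4) can be evaluated on a finite sample space follows; the feature values, the two decision boundaries and the closeness threshold are hypothetical stand-ins for f1, f2 and δ1, not the exact configuration of Figure 7.

```python
import numpy as np
from itertools import combinations

# Toy version of the finite-space example: 1-D feature values and labels.
X  = np.arange(10, dtype=float)      # shared feature space, X1 = X2
f2 = (X >= 5).astype(int)            # oracle decision boundary at 5
f1 = (X >= 2).astype(int)            # badly placed learned boundary at 2
delta = 1.5                          # closeness threshold for d1 = |.|

num, den = 0, 0
for i, j in combinations(range(len(X)), 2):
    if f2[i] == f2[j] and abs(X[i] - X[j]) < delta:  # same oracle class and similar
        den += 1
        if f1[i] != f1[j]:                           # but f1 disagrees: adversarial pair
            num += 1
print(num / den)  # empirical estimate of the probability in Eq. (11.4)
```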
Clearly, in this case c1 matters for the strong-robustness (when f1 is not a.e. continuous). This indicates that when (1) the sample space X is finite, (2) f1 learns a wrong decision boundary and (3) the probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to point out that this situation is very rare for a well-trained classifier f1.

The probability above is exactly the proportion of those pairs of points that f1 classifies into different classes while f2 treats them as similar, "same-class" samples. For this case, both g1 and c1 matter for the strong-robustness of f1. See Appendix Section 11.2 for an example showing how c1 makes f1 not strong-robust.

For cases when f1 is not continuous a.e., obtaining more samples is clearly a good way to learn a better decision boundary, which might improve the adversarial robustness of the classifier at the same time."}, {"section_index": "8", "section_name": "12 MORE ABOUT DNNS' ROBUSTNESS AGAINST ADVERSARIAL SAMPLES", "section_text": "

- f1(·): f1(·) is a DNN classifier with multiple layers, including linear perceptron layers, activation layers, convolutional layers and a softmax decision layer.
- (X1, d1): X1 denotes the feature space discovered by the layer right before the last fully connected layer. This feature space is automatically extracted from the original image space (e.g., the RGB representation) by the DNN. (X, d1') is defined by d1 using Eq. (10.2).
- (X2, d2): X2 denotes the feature space that the oracle (e.g., human annotators) uses to decide the ground-truth labels of training images. For example, a human annotator needs to recognize a hand-written digit "0"; X2 includes what patterns he/she needs for such a decision. (X, d2') is defined by d2 using Eq. (10.3).

12.1 MORE ABOUT: ARE STATE-OF-THE-ART DEEP NEURAL NETS STRONG-ROBUST?

We can observe some properties of d1 through experimental results. Table 5, Table 6, Table 7 and Table 8 show properties of d1 (and d1') obtained by performing testing experiments on four state-of-the-art DNN networks.

Table 5: Accuracy of the deep residual network (He et al., 2015) obtained from two noise-perturbed testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (defined in Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9411 | 0.9411
1 | 0.9409 | 0.5833
5 | 0.9369 | 0.3943
10 | 0.9288 | 0.3853

In Table 5, the model we use is a 200-layer residual network (He et al., 2015) trained on the ImageNet dataset (Deng et al., 2009) by Facebook (https://github.com/facebook/fb.resnet.torch). We generate two types of test samples from the 50,000 images in the validation set of ImageNet: (1) 50,000 randomly perturbed images. The random perturbation on each image is generated by first fixing the perturbation magnitude on every dimension to be the same, and then randomly assigning the sign on every dimension as + or - (with probability 1/2). In this way, the size of the perturbation can be described by ||x - x'||_∞, which we name the level of attacking power (defined in Eq. (12.6)).
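Below is a minimal sketch of the two perturbation types compared in these tables, under the assumption that attack power is the ∞-norm of the perturbation (Eq. (12.6)); the linear softmax classifier stands in for a DNN so that the fast-gradient-sign step can be written analytically, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sign_perturb(x, power):
    """Random perturbation: the same magnitude on every dimension, random sign,
    so that ||x - x'||_inf equals the attack power (before clipping)."""
    signs = rng.choice([-1.0, 1.0], size=x.shape)
    return np.clip(x + power * signs, 0.0, 255.0)

def fgsm_perturb(x, y, W, b, power):
    """Fast-gradient-sign perturbation for a linear softmax classifier
    (a stand-in for a DNN; only the gradient computation would change)."""
    logits = W @ x + b
    p = np.exp(logits - logits.max()); p /= p.sum()
    grad_x = W.T @ (p - np.eye(len(b))[y])   # gradient of cross-entropy w.r.t. x
    return np.clip(x + power * np.sign(grad_x), 0.0, 255.0)

x = rng.uniform(0, 255, size=12)             # a toy "image"
W, b, y = rng.normal(size=(3, 12)), np.zeros(3), 1
print(np.max(np.abs(random_sign_perturb(x, 5.0) - x)))  # <= 5 (clipping may shrink it)
print(np.max(np.abs(fgsm_perturb(x, y, W, b, 5.0) - x)))
```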
(2) 50,000 adversarially perturbed images. We use the fast-gradient sign method (introduced in Section 8.2) to generate such an adversarial perturbation for each seed image; the "attacking power" of these adversarial perturbations uses the same formula, Eq. (12.6). The first column of Table 5 shows the different attack powers (Eq. (12.6)) we use in the experiment. The second column shows the accuracy of running the DNN model on the first group of image samples, and the third column shows the accuracy of running the DNN model on the second group.

Table 6, Table 7 and Table 8 repeat similar experiments on three other DNN models: the Overfeat network (Sermanet et al., 2013), the residual network (He et al., 2015) and the VGG model (Simonyan & Zisserman, 2014). The conclusion is consistent across all four models.

Table 6: Accuracy of the Overfeat network (Sermanet et al., 2013) obtained from two noise-perturbed testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (defined in Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.7944 | 0.7944
1 | 0.7923 | 0.5922
5 | 0.7844 | 0.4270
10 | 0.7762 | 0.3485

Table 7: Accuracy of the residual network (He et al., 2015) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (defined in Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9431 | 0.9431
1 | 0.9431 | 0.9294
5 | 0.9429 | 0.6815
10 | 0.943 | 0.2961

"}, {"section_index": "9", "section_name": "12.2 CONNECTING PREVIOUS STUDIES HARDENING DNNS", "section_text": "Multiple hardening solutions (Zheng et al., 2016; Miyato et al., 2016; Lee et al., 2015) exist in the DNN literature. They mostly aim to learn a better g1 by minimizing different loss functions L_f1(x, x') so that when d2(g2(x), g2(x')) < ε, this loss L_f1(x, x') is small. This might improve the topological equivalence (or the finer topology). Two major variations exist among related methods: the choice of L_f1(x, x') and the way pairs (x, x') are generated.

- Choice of loss function L_f1(x, x'): Siamese training (G) (Eq. (12.4)) and (Lee et al., 2015) use L_f1(x, x') = d1(g1(x), g1(x')). Siamese training (F) chooses L_f1(x, x') = dist(f1(x), f1(x')), where dist(·,·) is a distance function measuring the difference between f1(x) and f1(x'). If f1 is continuous a.e., when d1(g1(x), g1(x')) is small we get that dist(f1(x), f1(x')) is small; however, the reverse direction may not hold. Therefore, L_f1(x, x') = dist(f1(x), f1(x')) may not work in some cases.
- Generating pairs (x, x'): Another variation is the way of generating pairs (x, x') such that d2(g2(x), g2(x')) is small. Two common ways exist. One generates x' by adding a random (e.g., Gaussian) perturbation to x. The other generates x' from x through an adversarial perturbation.

Besides, (Zheng et al., 2016) uses L_f1(x, x') = KL(f1(x), f1(x')) and uses it as a regularization term added to the original training loss function; its samples x' are generated from the original samples x by adding a small Gaussian noise. (Miyato et al., 2016) uses a similar loss function, but uses adversarially perturbed x' from x.
(Lee et al., 2015) uses L_f1(x, x') = d1(g1(x), g1(x')), where the x' are generated from the x by adding a small Gaussian noise. Recently proposed adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016) uses L_f1(x, x') = L(f1(x'), f2(x)) and uses adversarially perturbed x' from x. These studies are summarized and compared in Table 4.

Table 8: Accuracy of the wide residual network (Zagoruyko & Komodakis, 2016) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (defined in Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.953 | 0.953
1 | 0.953 | 0.8527
5 | 0.953 | 0.4718
10 | 0.953 | 0.2529

Table 9: Accuracy of the VGG model (Simonyan & Zisserman, 2014) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (defined in Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9395 | 0.9395
1 | 0.938 | 0.7807
5 | 0.938 | 0.3767
10 | 0.9377 | 0.2092

Our theoretical analysis indicates that strong-robustness is a strong condition for machine learning classifiers and requires a thorough understanding of the oracle. Since many state-of-the-art learning models, including many DNNs, are not strong-robust, it is important to understand and quantify how far away from strong-robustness they are. We name such situations "weak-robustness", and propose a quantitative measure describing how robust a classification model is against adversarial examples.

This section therefore proposes a new evaluation measure, "Adversarial Robustness of Classifiers (ARC)", to quantify how far a classifier is from strong-robustness. This quantitative measure considers both the predictor f1 and the oracle f2, and is based on the expectation of how difficult it is to generate adversarial examples. By design, a classifier f1's ARC achieves the maximum (1, since ARC is rescaled to [0, 1]) if and only if f1 is strong-robust (see Theorem (12.3)).

"}, {"section_index": "10", "section_name": "Definition 12.1. Adversarial Robustness of Classifiers (ARC)", "section_text": "By adding the constraint d2(x, x') < δ2 into our general definition of adversarial examples, and taking the expectation of d2 between an adversarial example and its seed sample, we define the measure ARC.

Two recent studies (Moosavi-Dezfooli et al., 2015; Papernot et al., 2015b) propose two similar measures, both assuming d2 is a norm function, but neither considers the importance of an oracle.
More importantly, (Papernot et al., 2015b) does not provide any computable way to calculate the measure. In (Moosavi-Dezfooli et al., 2015), the measure is normalized by the size of the test samples, while no evidence exists to show that the size of the perturbation is related to the size of the test samples.

This motivates us to design a computable criterion to estimate Definition (12.1). For instance, for image classification tasks we can choose d2 = ||·||_∞. Then, to estimate E[||x - x'||_∞], we need to make some assumptions. Assume there exists a threshold δ2 such that any perturbation larger than δ2 changes the classification of the oracle f2; that is, if ||x - x'||_∞ ≥ δ2, then f2(x) ≠ f2(x'). More concretely, for image classification tasks, as the input space is discrete (with every dimension ranging over the integers 0 to 255), ARC can be estimated by the following Eq. (12.2):

ARC_∞(f1, f2) = E[||x - x'||_∞] = Σ_{i=1}^{δ2-1} i · P(||x - x'||_∞ = i) + δ2 · P(f1(x) = f1(t), ∀t with ||x - t||_∞ < δ2),
where x' = argmin_{t ∈ X} d2(x, t) subject to f1(x) ≠ f1(t)     (12.2)

The fact that previous measures neglect the oracle f2 leads to a severe problem: the generated adversarial examples are not necessarily valid. This is because, if the size of the perturbation is too large, the oracle f2 may classify the perturbed sample into a class different from that of the seed sample.

As we discussed in Section 4, both accuracy and robustness are important properties in determining whether a classification model is preferable. Therefore we combine accuracy and ARC into the following unified measure ARCA:

ARCA(f1) = Accuracy(f1) × ARC(f1, f2) / δ2     (12.3)

Theorem 12.3. f1 is strong-robust against adversarial examples if and only if ARC(f1)/δ2 = 1.

Proof. If ARC(f1)/δ2 = 1, then, based on Definition (12.1), for almost every seed sample x the minimization

x' = argmin_{t ∈ X} d2(x, t)   subject to f1(x) ≠ f1(t), d2(x, t) < δ2

admits no solution, which is exactly the definition of strong-robustness. □

12.4 USING "SIAMESE ARCHITECTURE" TO IMPROVE DNNS' ADVERSARIAL ROBUSTNESS

∀x, x' ∈ X with d2(g2(x), g2(x')) < ε:   minimize over w   d1(g1(x; w), g1(x'; w))     (12.5)

This essentially forces the DNN to achieve the finer topology between (X1, d1) and (X2, d2) by learning a better g1. We name the strategy that minimizes the loss defined in Eq. (12.5) "Siamese training", because this formulation uses the Siamese architecture (Bromley et al., 1993), a classical deep-learning approach proposed for learning embeddings. We feed a slightly perturbed input x' together with its original seed x into the Siamese network, which contains two copies (sharing the same weights) of the DNN model we want to improve. By penalizing the difference between the middle-layer (g1(·)) outputs of (x, x'), Siamese training pushes the two spaces (X, d1') and (X2, d2') toward a finer-topology relationship, and thus increases the robustness of the model. This can be concluded from Figure 8. By assuming d2(g2(x), g2(x')) equals (approximately) ||Δ(x, x')||, previous studies (summarized in Table 2) mostly assume d2 is a norm function ||·||. For a pair of inputs (x, x') that are close to each other (i.e., ||x - x'|| is small) in (X, ||·||), Siamese training pushes them to be close also in (X1, d1). As a result, a sphere in (X1, d1) maps to a not-too-thin high-dimensional ellipsoid in (X, ||·||). Therefore, the adversarial robustness of the DNN model may improve after Siamese training.
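A minimal PyTorch sketch of one Siamese-training step in the spirit of Eq. (12.5) follows, assuming Euclidean d1 on the embedding g1; the architecture and noise scale are illustrative. In practice this penalty would be combined with the usual classification loss, since minimizing it alone can collapse g1 toward a constant map.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for g1: the sub-network up to the layer before classification.
g1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(g1.parameters(), lr=1e-3)

def siamese_step(x, noise_scale=0.05):
    x_prime = x + noise_scale * torch.randn_like(x)  # pair with small d2 distance
    # One shared-weight network, two forward passes; penalize the Euclidean d1
    # distance between the middle-layer embeddings of the pair.
    loss = ((g1(x) - g1(x_prime)) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.randn(8, 32)  # a toy mini-batch
for _ in range(3):
    print(siamese_step(x))
```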
In experiments, we choose the Euclidean distance ||·||_2 for d1(·) (however, many other choices are possible).

Figure 8: Sketch of Siamese training. Inputs are pairs of a seed sample and its randomly perturbed version, where we suppose the d2 distance between the pair is small. By forwarding a pair into the Siamese network and penalizing the distance between the outputs of the pair, this training intuitively limits the d1 distance between two similar samples. Backpropagation is used to update the weights of the network.

Datasets: Currently, we use the following two image datasets to evaluate our model:

- MNIST: MNIST, released in (LeCun et al., 1998), includes a task to classify handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 pixel black-and-white image of a handwritten digit.
- CIFAR-10: CIFAR-10 is an image classification dataset released by (Krizhevsky & Hinton, 2009). The training set contains 50,000 32x32 color images in 10 classes, and the test set contains 10,000 32x32 color images.

VGG model: We choose a VGG model (Simonyan & Zisserman, 2014) as the base DNN model. The VGG model in our experiment has 16 weight layers (55 layers in total).

Baselines: Four training strategies are compared through testing on adversarial examples (details in Section 12.2): (1) the original model; (2) stability training (Zheng et al., 2016); (3) Siamese training (alone); (4) adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016), which uses adversarially perturbed x' and original samples x to train a DNN model. (Stability training was shown to improve model robustness against Gaussian noise in (Zheng et al., 2016); differently, our experiments focus on testing a learning model's robustness against "adversarial perturbation". The sole purpose of including this baseline is to show where state-of-the-art hardening strategies stand in our experimental setting.)

Metrics:

- Test accuracy: We use top-1 test accuracy as the performance metric. It is defined as the number of successfully classified samples divided by the number of all test samples. The base model achieves its highest accuracy when there is no adversarial attack.
- ARC (Eq. (12.2)): We use ARC to measure the adversarial robustness of each model. The maximum attack power is chosen to be 10.
- ARCA (Eq. (12.3)): We use ARCA to measure the overall performance of each model.

We generate adversarial examples using the fast-gradient sign method, in which the power of the adversarial attack can be easily controlled. By controlling the power of fast-sign attacks, we can obtain a complete view of how the accuracy changes according to different attack powers. In the following analysis, the attack power is defined as:

Power = ||x - x'||_∞     (12.6)

For image classification tasks, we control the perturbed sample so that it remains in the valid input space, i.e., every dimension of the perturbed sample is in the range of integers between 0 and 255.

The first column of Table 10 and Table 11 shows the different levels of attack power (defined in Eq. (12.6)). The test accuracy reported in Figure 9(a), Figure 10(a), Table 10 and Table 11 shows that the different hardening approaches can reduce the effectiveness of the adversarial attacks.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Marco Barreno, Blaine Nelson, Anthony D Joseph, and JD Tygar.
The Security of Machine Learning. Machine Learning, 81(2):121-148, 2010.

Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16-25. ACM, 2006. URL http://dl.acm.org/citation.cfm?id=1128824

Table 10: Test accuracy for different training strategies on the CIFAR-10 dataset.

Attack power (Eq. (12.6)) | Original model | Stability Training | Siamese Training
0 | 93.95% | 93.81% | 93.96%
1 | 78.07% | 78.01% | 93.88%
2 | 61.38% | 60.34% | 90.13%
3 | 50.07% | 49.21% | 86.73%
4 | 42.86% | 41.51% | 83.85%
5 | 37.67% | 36.33% | 81.21%
6 | 33.60% | 32.08% | 78.61%
7 | 29.70% | 28.09% | 76.09%
8 | 26.23% | 25.11% | 73.21%
9 | 23.53% | 22.43% | 69.67%
10 | 20.92% | 20.25% | 65.98%
ARC | 4.9798 | 4.8717 | 8.9332
ARCA | 0.4253 | 0.4155 | 0.7631

Table 11: Test accuracy for different training strategies on the MNIST dataset.

Attack power (Eq. (12.6)) | Original model | Adversarial Training | Stability Training | Siamese Training
0 | 98.98% | 98.96% | 99.06% | 99.03%
1 | 98.75% | 98.84% | 98.94% | 98.84%
2 | 98.44% | 98.63% | 98.60% | 98.47%
3 | 98.10% | 98.41% | 98.29% | 98.16%
4 | 97.56% | 98.12% | 97.80% | 97.78%
5 | 97.09% | 97.80% | 97.47% | 97.26%
6 | 96.23% | 97.38% | 97.01% | 96.56%
7 | 95.43% | 96.96% | 96.23% | 95.81%
8 | 94.22% | 96.47% | 95.37% | 95.01%
9 | 92.95% | 96.06% | 94.49% | 93.89%
10 | 91.53% | 95.57% | 93.30% | 92.76%
ARC | 10.5928 | 10.732 | 10.6656 | 10.6357
ARCA | 0.953159 | 0.96549 | 0.960486 | 0.957503

Battista Biggio, Giorgio Fumera, and Fabio Roli. Adversarial pattern classification using multiple classifiers and randomisation. In Structural, Syntactic, and Statistical Pattern Recognition, pp. 500-509. Springer, 2008. URL http://link.springer.com/chapter/10.1007/978-3-540-89689-

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.

Battista Biggio, Samuel Rota Bulo, Ignazio Pillai, Michele Mura, Eyasu Zemene Mequanint, Marcello Pelillo, and Fabio Roli. Poisoning complete-linkage hierarchical clustering. In Structural, Syntactic, and Statistical Pattern Recognition, pp. 42-52. Springer Berlin Heidelberg, 2014.

Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, and Yann LeCun. Differentially- and non-differentially-private random decision trees. arXiv preprint arXiv:1410.6973, 2014.

Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.
Figure 9: Result of CIFAR-10: (a) Test accuracy under adversarial example attacks: three different colors for three different training strategies (details in Section 12.2). We do not include the result of adversarial training because previous adversarial training cannot be used on networks with batch normalization; some tricks for training such networks were released in a recent paper (Kurakin et al., 2016). (b) ARC and ARCA for three different training strategies under adversarial example attack.

Figure 10: (a) Test accuracy under adversarial example attacks on the MNIST dataset: four different colors for four different training strategies (details in Section 12.2). (b) ARC and ARCA for four different training strategies under adversarial example attacks.

Nicholas Carlini and David Wagner. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311, 2016a.

Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

Rainer Dahlhaus. Fitting Time Series Models to Nonstationary Processes. The Annals of Statistics, 25(1):1-37, 1997.

James J DiCarlo and David D Cox. Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8):333-341, 2007.

James J DiCarlo, Davide Zoccolan, and Nicole C Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-434, 2012.

John C Duchi, Michael I Jordan, and Martin J Wainwright. Privacy aware learning. Journal of the ACM (JACM), 61(6):38, 2014.

Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification. John Wiley & Sons, 2012.

Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Fundamental limits on adversarial robustness. In Proceedings of ICML, Workshop on Deep Learning, number EPFL-CONF-214923, 2015.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, December 2014. URL http://arxiv.org/abs/1412.6572

Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255.
IEEE, 2009.\nAwni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger,\nSanjeev Satheesh, Shubho Sengupta, Adam Coates, and others. DeepSpeech: Scaling up end-to-\n\nend speech recognition. arXiv preprint arXiv:1412.5567, 2014. URL\nJohn L Kelley. General topology. Springer Science & Business Media, 1975.\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deer\nConvolutional Neural Networks. In Advances in Neural Information Processing Systems, pp\n1097-1105, 2012.\nTaehoon Lee, Minsuk Choi, and Sungroh Yoon. Manifold regularized deep neural networks using\nadversarial examples. arXiv preprint arXiv:1511.06381, 2015.\nBo Li and Yevgeniy Vorobeychik. Feature cross-substitution in adversarial classification. In Advances\nin Neural Information Processing Systems, pp. 2087-2095, 2014.\nShike Mei and Xiaojin Zhu. The security of latent dirichlet allocation. 2015a.\nShike Mei and Xiaojin Zhu. Some submodular data-poisoning attacks on machine learners. 2015b\nTakeru Miyato, Shin-ichi Maeda, and Koyama Masanori. Distributional smoothing with virtua\nadversarial training. JCLR\u2019 16, 2016.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and\naccurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599, 2015.\nAnh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence\npredictions for unrecognizable images. In CVPR. IEEE. 2015.\nNicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of\nsecurity and privacy in machine learning. arXiv preprint arXiv: 1611.03814, 2016b.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image\nrecognition. arXiv preprint arXiv: 1409. 1556, 2014.\nArunesh Sinha, Debarun Kar, and Milind Tambe. Learning adversary behavior in security games\nA pac model perspective. In Proceedings of the 2016 International Conference on Autonomous\nAgents & Multiagent Systems, pp. 214-222. International Foundation for Autonomous Agents and\nMultiagent Systems, 2016.\nBen Stoddard, Yan Chen, and Ashwin Machanavajjhala. Differentially private algorithms fo\nempirical machine learning. arXiv preprint arXiv: 1411.5428, 2014.\nWilliam Uther and Manuela Veloso. Adversarial reinforcement learning. Technical report, Technicz\nreport, Carnegie Mellon University, 1997. Unpublished, 1997.\nPascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and\ncomposing robust features with denoising autoencoders. In Proceedings of the 25th international\nconference on Machine learning. pp. 1096-1103. ACM. 2008.\nHuang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. Is\nfeature selection secure against training data poisoning? In Proceedings of the 32nd International\nConference on Machine Learning (ICML-15), pp. 1689-1698, 2015.\nPengtao Xie, Misha Bilenko, Tom Finley, Ran Gilad-Bachrach, Kristin Lauter, and Michael Naehrig\nCrypto-nets: Neural networks over encrypted data. arXiv preprint arXiv: 1412.6181, 2014.\nEric P. Xing, Michael I. Jordan, Stuart J Russell, and Andrew Y. Ng. Distance metric learning with\napplication to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer (eds.),\nAdyances in Neural Information Processine Systems 15. pp. 521\u2014528. MIT Press. 
2003.\nSergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv: 1605.07146\n2016.\nFei Zhang, Patrick PK Chan, Battista Biggio, Daniel S. Yeung, and Fabio Roli. Adversarial Featur\nSelection against Evasion Attacks. JEEE Transactions on Cybernetics, PP(1), 2015.\nStephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deer\nneural networks via stability training. arXiv preprint arXiv: 1604.04326, 2016."}]
H12GRgcxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The presence of class label noise inherent to training samples has been reported to deteriorate the performance of even the best classifiers in a broad range of classification problems (Nettleton et al. (2010), Pechenizkiy et al. (2006), Zhu & Wu (2004)). Noisy labels also tend to be more harmful than noisy attributes (Zhu & Wu (2004)). Noisy data are usually related to the data collection process. Typically, the labels used to train a classifier are assumed to be unambiguous and accurate. However, this assumption often does not hold, since labels that are provided by human judgments are subjective. Many of the largest image datasets have been extracted from social networks. These images are labeled by non-expert users, and building a consistent model based on a precisely labeled training set is very tedious. Mislabeled examples have been reported even in critical applications such as biomedical datasets, where the available data are restricted (1999). A very common approach to noisy datasets is to remove the suspect samples in a preprocessing stage or to have them relabeled by a data expert (Brodley & Friedl (1999)). However, these methods are not scalable and may run the risk of removing crucial examples that can impact small datasets considerably.

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Variants that are noise robust have been proposed for the most common classifiers such as logistic regression and SVM (Frénay & Verleysen (2014), Bootkrajang & Kabán (2012), Beigman & Klebanov (2009)). However, classifiers based on label-noise robust algorithms are still affected by label noise. From a theoretical point of view, it has been shown that most loss functions are not completely robust to label noise. Natarajan et al. (2013) proposed a generic unbiased estimator for binary classification with noisy labels. They developed a surrogate cost function that can be expressed as a weighted sum of the original cost functions, and provided asymptotic bounds for performance. Grandvalet & Bengio (2005) addressed the problem of missing labels, which can be viewed as an extreme case of noisy-label data. They suggested a semi-supervised algorithm that encourages the classifier to predict the non-labeled data with high confidence by adding a regularization term to the cost function. The problem of classification with label noise is an active research area. Comprehensive up-to-date reviews of both the theoretical and applied aspects of classification with label noise can be found in Frénay & Kabán (2014) and Frénay & Verleysen (2014).

In spite of the huge success of deep learning, there are not many studies that have explicitly attempted to address the problem of neural network (NN) training using data with unreliable labels. Larsen et al. (1998) introduced a single noise parameter that can be calculated by adding a new regularization term and cross-validation. Mnih & Hinton (2012) proposed a more realistic noise model that depends on the true label; however, they only considered the binary classification case. Sukhbaatar & Fergus (2014) recently proposed adding a constrained linear layer on top of the softmax layer, and showed that only under some strong assumptions can the linear layer be interpreted as the transition matrix between the true and noisy (observed) labels, and the softmax output layer as the true probabilities of the labels. Reed et al. (2014) suggested handling the unreliability of the training-data labels by maximizing the likelihood function with an additional classification entropy regularization term.

The correct unknown label can be viewed as a hidden random variable. Hence, it is natural to apply the EM algorithm, where in the E-step we estimate the true label and in the M-step we retrain the network. Several variations of this paradigm have been proposed (e.g. Mnih & Hinton (2012); Bekker & Goldberger (2016)). However, iterating between EM-steps and neural network training does not scale well. In this study we use latent-variable probabilistic modeling, but we optimize the likelihood score function within the framework of neural networks. Current noisy-label approaches assume, either implicitly or explicitly, that, given the correct label, the noisy label is independent of the feature vector. This assumption is probably needed to simplify the modeling and derive applicable learning algorithms. However, in many cases this assumption is not realistic, since a wrong annotation is more likely to occur in cases where the features are misleading. By contrast, our framework makes it easy to extend the proposed learning algorithm to the case where the noise depends on both the correct label and the input features. In the next section we describe the model formulation and review the EM-based approach. In Section 3 we describe our method, which is based on adding another softmax layer to the network, and in Section 4 we present our results.

Assume we want to train a multi-class neural-network soft-classifier p(y = i|x; w), where x is the feature vector, w is the network parameter set and i is a member of the class set {1, ..., k}. We further assume that in the training process we cannot directly observe the correct label y. Instead, we only have access to a noisy version of it, denoted by z. Here we follow the probabilistic modeling and the EM learning approach described in Bekker & Goldberger (2016). In this approach, noise generation is assumed to be independent of the features and is modeled by a parameter θ(i, j) = p(z = j|y = i). The noise distribution is unknown and we want to learn it as part of the training phase. The probability of observing a noisy label z given the feature vector x is:

p(z = j|x; w, θ) = Σ_{i=1}^{k} p(z = j|y = i; θ) p(y = i|x; w)     (1)

where k is the number of classes. The model can be illustrated as a simple chain x → y → z: the observed noisy label z depends only on the hidden correct label y, which in turn depends on the feature vector x.

In the training phase we are given n feature vectors x1, ..., xn with the corresponding noisy labels z1, ..., zn, which are viewed as noisy versions of the correct hidden labels y1, ..., yn. The log-likelihood of the model parameters is:

L(w, θ) = Σ_{t=1}^{n} log ( Σ_{i=1}^{k} p(z_t|y_t = i; θ) p(y_t = i|x_t; w) )     (2)

Based on the training data, the goal is to find both the noise distribution θ and the neural network parameters w that maximize the likelihood function. Since the random variables y1, ..., yn are hidden, we can apply the EM algorithm to find the maximum-likelihood parameter set. In the E-step of each EM iteration we estimate the hidden true data labels based on the noisy labels and the current parameters:

c_ti = p(y_t = i | x_t, z_t; w_0, θ_0),   t = 1, ..., n,   i = 1, ..., k     (3)

where w_0 and θ_0 are the current parameter estimates. In the M-step we update the noise distribution:

θ(i, j) = Σ_t c_ti 1_{z_t = j} / Σ_t c_ti,   i, j ∈ {1, ..., k}     (4)

The k x k matrix θ can be viewed as a confusion matrix between the soft estimates c_ti of the true labels and the observed noisy labels z_t.
As part of the EM M-step, to find the updated NN parameter w we need to maximize the following function:

S(w) = Σ_{t=1}^{n} Σ_{i=1}^{k} c_ti log p(y_t = i|x_t; w)     (5)

which is a soft version of the likelihood function of the fully observed case, based on the current estimate of the true labels. The back-propagation derivatives of the function (5) that we maximize in the M-step are:

∂S/∂u_i = Σ_{t=1}^{n} ( p(y_t = i|x_t, z_t; w_0, θ_0) - p(y_t = i|x_t; w) ) h_t

such that h is the final hidden layer and u_1, ..., u_k are the parameters of the softmax output layer.

The method reviewed here is closely related to the work of Mnih & Hinton (2012). They addressed the problem of mislabeled data points in a particular type of dataset (aerial images). The main difference is that, in their approach, the noise parameters are not learned. Instead, they assume that the noise model can be separately tuned using a validation set or set by hand. Note that even if the true noise parameters are given, we still need to apply the EM iterative procedure. However, this assumption makes the interaction between the E-step and the NN learning much easier, since each time a data point x_t is visited we can compute p(y_t = i|x_t, z_t) based on the current network parameters and the pre-defined noise parameters. Motivated by the need for model compression, Hinton et al. (2015) introduced an approach to learn a "distilled" model by training a more compact neural network to reproduce the output of a larger network. Using the notation defined above, in the second training stage they actually optimized the cost function S(w) = Σ_{t=1}^{n} Σ_{i=1}^{k} p(y_t = i|x_t; w_0, θ_0) log p(y_t = i|x_t; w), such that w_0 is the parameter of the larger network that was trained using the labels z_1, ..., z_n, w is the parameter of the smaller network, and θ_0(i, j) in this case is a non-informative distribution (i.e. θ_0(i, j) = 1/k).

There are several drawbacks to the EM-based approach described above. The EM algorithm is a greedy optimization procedure that is notoriously prone to getting stuck in local optima. Another potential issue with combining neural networks and EM is scalability. The framework requires training a neural network in each iteration of the EM algorithm. For real-world, large-scale networks, even a single training iteration is a non-trivial challenge. Moreover, in many domains (e.g. object recognition in images) the number of labels is very large, so many EM iterations are likely to be needed for convergence. Another drawback of these probabilistic models is that they are based on the simplistic assumption that the noise error depends only on the true labels and not on the input features. In this study we propose a method for training neural networks with noisy labels that successfully addresses all these problems.
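The following is a minimal NumPy sketch of the E-step (Eq. (3)) and the θ update of the M-step (Eq. (4)) described above; the network outputs are replaced by random stand-ins, since the point is only the update algebra.

```python
import numpy as np

def e_step(pred, z, theta):
    """E-step (Eq. (3)): c[t, i] = p(y_t = i | x_t, z_t), where
    pred[t, i] = p(y_t = i | x_t; w0) and theta[i, j] = p(z = j | y = i)."""
    c = pred * theta[:, z].T          # c[t, i] proportional to p(y=i|x_t) * p(z_t|y=i)
    return c / c.sum(axis=1, keepdims=True)

def m_step_theta(c, z, k):
    """M-step (Eq. (4)): theta(i, j) = sum_t c[t,i] 1{z_t=j} / sum_t c[t,i]."""
    one_hot = np.eye(k)[z]            # n x k indicator of the noisy labels
    return (c.T @ one_hot) / c.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
n, k = 6, 3
pred = rng.dirichlet(np.ones(k), size=n)   # stand-in network outputs p(y|x)
z = rng.integers(0, k, size=n)             # observed noisy labels
theta = np.full((k, k), 1.0 / k)           # initial noise model
c = e_step(pred, z, theta)
theta = m_step_theta(c, z, k)
print(theta.round(3))                      # each row sums to 1
```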
In the previous section we utilized the EM algorithm to optimize the noisy-label likelihood function (2). In this section we describe an algorithm that optimizes the same function within the framework of neural networks. Assume the neural network classifier we are using is based on non-linear intermediate layers followed by a softmax output layer used for soft classification. Denote the non-linear function applied on an input x by h = h(x), and denote the softmax layer that predicts the true label y by:

p(y = i|x; w) = exp(u_i^T h + b_i) / Σ_{l=1}^{k} exp(u_l^T h + b_l)     (6)

where w is the network parameter set (including the softmax layer). We next add another softmax output layer to predict the noisy label z based on both the true label and the input features:

p(z = j|y = i, x) = exp(u_ij^T h + b_ij) / Σ_{l} exp(u_il^T h + b_il)     (7)

p(z = j|x) = Σ_{i} p(z = j|y = i, x) p(y = i|x)     (8)

We can also define a simplified version where the noisy label only depends on the true label, i.e. we assume that label flips are independent of x:

p(z = j|y = i) = exp(b_ij) / Σ_{l} exp(b_il)     (9)

p(z = j|x) = Σ_{i} p(z = j|y = i) p(y = i|x)     (10)

We denote the two noise-modeling variants as the complex model (c-model) and the simple model (s-model). Hereafter we use the notation w_noise for all the parameters of the second softmax layer, which can be viewed as a noise adaptation layer.

In the training phase we are given n feature vectors x1, ..., xn with corresponding noisy labels z1, ..., zn, which are viewed as noisy versions of the correct hidden labels y1, ..., yn. The log-likelihood of the model parameters is:

S(w, w_noise) = Σ_{t=1}^{n} log ( Σ_{i=1}^{k} p(z_t|y_t = i, x_t; w_noise) p(y_t = i|x_t; w) )     (11)

Since the noise is modeled by adding another layer to the network, the score S(w, w_noise) can be optimized using standard techniques for neural network training. By setting

p(z = j|y = i) = θ(i, j) = exp(b_ij) / Σ_{l} exp(b_il)     (12)

it can easily be verified that, by using either the EM algorithm (2) or the s-model neural network scheme (12), we are actually optimizing exactly the same function. Thus the neural network with the s-model noise adaptation layer provides an alternative optimization strategy to the EM algorithm. Instead of alternating between optimizing the noise model and the network classifier, we consider them as components of the same network and optimize them simultaneously.

Figure 1: An illustration of the noisy-label neural network architecture for the training phase (above) and test phase (below). In the training phase, the non-linear function is followed by two softmax layers (predicting y and then z); in the test phase, only the true-label softmax is kept.

The proposed architecture for training the neural network based on training data with noisy labels is illustrated in Figure 1.
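A minimal sketch of the s-model forward pass, p(z = j|x) = Σ_i p(z = j|y = i) p(y = i|x), with the noise adaptation layer parameterized by a bias matrix B as in Eqs. (9)-(10); the numbers are illustrative.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def s_model_forward(p_y, B):
    """s-model noise adaptation layer: rows of softmax(B) give theta[i, j] =
    p(z=j|y=i); the output is p(z|x) = p(y|x) @ theta for a batch of samples."""
    theta = softmax(B, axis=1)
    return p_y @ theta

rng = np.random.default_rng(0)
k = 4
p_y = rng.dirichlet(np.ones(k), size=5)                 # true-label softmax outputs
B = np.log(np.full((k, k), 0.02) + 0.92 * np.eye(k))    # near-diagonal noise layer
p_z = s_model_forward(p_y, B)
print(p_z.sum(axis=1))                                  # each row sums to 1
```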
Note that in the c-model, where the noise is also dependent on the input features, we can still apply the EM algorithm to learn the parameters of the additional noise layer. However, there is no closed-form solution in the M-step for the optimal parameters, and we need to apply neural-network training in the M-step to find the noise-layer parameters.

At test time we want to predict the true labels. Hence, we remove the last softmax layer that aims to get rid of the noise in the training set, and compute the true-label softmax estimation p(y = i | x; w). The proposed architecture for training the neural network based on training data with noisy labels is illustrated in Figure 1.

[Figure 1: An illustration of the noisy-label neural network architecture for the training phase (above: non-linear function -> true-label soft-max -> noise soft-max with parameters w_noise) and test phase (below: non-linear function -> true-label soft-max).]

There are degrees of freedom in the two softmax layer model. Hence, a careful initialization of the parameters of the noise adaptation layer is crucial for successful convergence of the network into a good classifier of the correct labels at test time. We used the parameters of the original network to initialize the parameters of the s-model network that contains the noise adaptation layer. We can initialize the softmax parameters of the s-model by assuming a small uniform noise:

b_{ij} = \log \left( (1 - \epsilon) \mathbb{1}_{\{i = j\}} + \frac{\epsilon}{k - 1} \mathbb{1}_{\{i \neq j\}} \right)

such that k is the number of different classes. A better procedure is to first train the original NN without the noise-adaptation layer, ignoring the fact that the labels are noisy. We can then treat the labels produced by the NN as the true labels, compute the confusion matrix on the train set, and use it as an initial value for the bias parameters:

b_{ij} = \log \left( \frac{\sum_{t} \mathbb{1}_{\{z_t = j\}} \, p(y_t = i | x_t)}{\sum_{t} p(y_t = i | x_t)} \right)

such that x_1, ..., x_n are the feature vectors of the training dataset and z_1, ..., z_n are the corresponding noisy labels. So far we have concentrated on parameter initialization for the s-model. The strategy that works best to initialize the c-model parameters is to use the parameters that were optimized for the s-model. In other words, we set the linear terms u_{ij} to zero and initialize the bias terms b_{ij} with the values that were optimized by the s-model.

The computational complexity of the proposed method is quadratic in the size of the class-set. Suppose there are k classes to predict; in this case the proposed methods require k+1 sets of softmax operations with a size of k each. Hence there are scalability problems when the class set is large. As we explained in the previous paragraph, we initialize the second soft-max layer using the confusion matrix of the baseline system. The confusion matrix is a good estimation of the label noise. Assume the rows of the matrix correspond to the true labels and the matrix columns correspond to the noisy labels. The l largest elements in the i-th row are the most frequent noisy class values when the true class value is i. We can thus connect the i-th element in the first softmax layer only to its l most probable noisy class candidates. Note that if we connect the i-th label in the first softmax only to the i-th label in the second softmax layer, the second softmax layer collapses to the identity and we obtain the standard baseline model. Taking the l most likely connections to the second softmax layer, we allow an additional l-1 possible noisy labels for each correct label. We thus obtain a data-driven sparsification of the second softmax layer, which solves the scalability problem since the complexity becomes linear in the number of classes instead of quadratic. In the experiment section we show that by using this approach there is not much difference in performance.
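As a rough sketch of this sparsification under our own naming (sparse_mask, confusion), one might build a top-l connection mask from the confusion matrix as follows.

```python
# Sketch: keep only the l most probable noisy-label candidates per true class.
import numpy as np

def sparse_mask(confusion, l):
    """confusion: (k, k), rows = true labels, columns = noisy labels.
    Returns a binary (k, k) mask keeping the l largest entries per row."""
    k = confusion.shape[0]
    mask = np.zeros_like(confusion)
    top_l = np.argsort(-confusion, axis=1)[:, :l]   # indices of l largest per row
    mask[np.arange(k)[:, None], top_l] = 1.0
    return mask

# During training, the bias matrix would be masked before the row-wise softmax,
# e.g. b * mask + (-1e9) * (1 - mask), so each true label i can only flip to its
# l most likely noisy candidates; with l = 1 the layer collapses to the identity.
```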
Our architecture, which is based on a concatenation of softmax layers, resembles the hierarchical softmax approach of Morin & Bengio (2005), which replaces the flat softmax layer with a hierarchical layer that has the classes as leaves. This allowed them to decompose calculating the probability of the class into a sequence of probability calculations, which saves us from having to calculate the expensive normalization over all classes. The main difference between our approach and theirs (apart from the motivation) is that in our approach the true-label softmax layer is fully connected to the noisy-label layer. Sukhbaatar & Fergus (2014) suggested adding a linear layer to handle noisy labels. Their approach is similar to our s-model. In their approach, however, they proposed a different learning procedure."}, {"section_index": "2", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate the robustness of deep learning to training data with noisy labels, with and without explicit noise modeling. We first show results on the MNIST data-set with injected label noise in our experiments. The MNIST is a database of handwritten digits, which consists of 28 x 28 images. The dataset has 60k images for training and 10k images for testing. We used a two hidden layer NN comprised of 500 and 300 neurons. The non-linear activation we used was ReLU, and we used dropout with parameter 0.5. We trained the network using the Adam optimizer (Kingma & Ba, 2014) with default parameters, which we found to converge more quickly and effectively than SGD. We used a mini-batch size of 256. These settings were kept fixed for all the experiments described below. In addition to a network that is based on fully connected layers, we also applied a network based on a CNN architecture. The results we obtained in the two architectures were similar. The network we implemented is publicly available.

We generated noisy data from clean data by stochastically changing some of the labels. We converted each label with probability p to a different label according to a predefined permutation. We used the same permutation as in Reed et al. (2014). The labels of the test data remained, of course, unperturbed to validate and compare our method to the regular approach.

We compared the proposed noise robust models to other model training strategies. The first network was the baseline approach that ignores the fact that the labels of the training data are unreliable. Denote the observed noisy label by z and the softmax decision by q_1, ..., q_k. The baseline log-likelihood score (for a single input) is:

S = \sum_{i} \mathbb{1}_{\{z = i\}} \log(q_i)

We also implemented two variants of the noise robust approach proposed by Reed et al. (2014). They suggested a soft version,

\beta S - (1 - \beta) H = \beta \sum_{i} \mathbb{1}_{\{z = i\}} \log(q_i) + (1 - \beta) \sum_{i} q_i \log(q_i),

and a hard version,

\beta S + (1 - \beta) \max_{i} \log(q_i).

In their experiments they took \beta = 0.8 for the hard version and \beta = 0.95 for the soft version, and observed that the hard version provided better results. Finally, we implemented the two variants of our approach; namely, the noise modeling based only on the labels (s-model) and the noise modeling that was also based on the features (c-model).
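For reference, these per-example objectives can be written down directly; a minimal NumPy sketch with our own function names:

```python
# Per-example training scores compared in the experiments:
# q = softmax outputs over k classes, z = observed noisy label.
import numpy as np

def baseline_score(q, z):
    return np.log(q[z])                        # S = sum_i 1[z = i] log q_i

def reed_soft(q, z, beta=0.95):
    return beta * np.log(q[z]) + (1 - beta) * np.sum(q * np.log(q))

def reed_hard(q, z, beta=0.8):
    return beta * np.log(q[z]) + (1 - beta) * np.max(np.log(q))
```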
[Figure 2: Test classification accuracy results on the MNIST dataset as a function of the noise level, for the complex (c-model), simple (s-model), Reed hard, Reed soft, and baseline methods. The results are shown for several training data sizes, (a) 20%, (b) 50%, and (c) 100% of the training subset.]

Figure 2 depicts the comparative test accuracy results as a function of the fraction of noise. The results are shown for three different sizes of training data, i.e. (20%, 50%, 100%) of the MNIST training subset. Bootstrapping was used to compute confidence intervals around the mean: 1000 times, N samples were randomly drawn with repeats from the N available samples and the mean was computed. The confidence interval was taken to be the 2.5% and 97.5% percentiles of this process.

The results show that all the methods that are explicitly aware of the noise in the labels are better than the baseline, which is the standard training approach. We revalidated the results reported in Reed et al. (2014) and showed that the hard version of their method performs better than the soft version. In all cases our models performed better than the alternatives. In most cases the c-model was better than the s-model. In the case where the entire dataset was used for training, we can see from the results that there was a phase transition phenomenon: we obtained almost perfect classification results until the noise level was high, at which point there was a sudden strong performance drop. Analyzing why this effect occurred is left for future research.

We next show the results on the CIFAR-100 image dataset (Krizhevsky & Hinton, 2009), which consists of 32 x 32 color images arranged in 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. We used raw images directly without any pre-processing or augmentation. We generated noisy data from clean data by stochastically changing some of the labels. We converted each one of the 100 labels with probability p to a different label according to a predefined permutation. The labels of the test data remained, of course, unperturbed to validate and compare our method to the regular approach. We used a CNN network with two convolutional layers combined with ReLU activation and max-pooling, followed by two fully connected layers. Figure 3 depicts the comparative test accuracy results as a function of the fraction of noise for three different sizes of training data, i.e. (20%, 50%, 100%) of the CIFAR-100 training subset. Bootstrapping was used to compute confidence intervals around the mean in the same way as for the MNIST experiment.

[Figure 3: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level, for the complex, simple, Reed hard, and baseline CNN models at training data sizes 20%, 50%, and 100%. The results are shown for several training data sizes (20%, 50%, 100%) of the training subset (for a CNN network architecture).]

The results showed that the proposed method works better than the alternatives.
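The confidence-interval procedure used in both experiments is simple enough to sketch directly (names ours):

```python
# Sketch of the bootstrap confidence-interval procedure described above:
# resample the N test outcomes with replacement 1000 times and take the
# 2.5% / 97.5% percentiles of the resampled means.
import numpy as np

def bootstrap_ci(correct, n_rounds=1000, seed=0):
    """correct: binary array of per-example test outcomes."""
    rng = np.random.default_rng(seed)
    n = len(correct)
    means = [rng.choice(correct, size=n, replace=True).mean()
             for _ in range(n_rounds)]
    return np.percentile(means, [2.5, 97.5])
```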
The simple model consistently provided the best results, but when the noise level was very high the complex method tended to perform better.

We next report experimental results for the sparse variant of our method that remains efficient even when the class set is large. We demonstrate this on the CIFAR-100 dataset, which consists of 100 possible classes. For each class we only took the five most probable classes in the confusion matrix which is used to initialize the model parameters (see Section 3). As can be seen in Figure 4, sparsifying the second softmax layer did not result in a drop in performance.

[Figure 4: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level. The results of regular and sparse (top-5) second softmax layers are shown for several training data sizes (20%, 50%, 100%) of the training subset.]"}, {"section_index": "3", "section_name": "5 CONCLUSION", "section_text": "In this paper we investigated the problem of training neural networks that are robust to label noise. We proposed an algorithm for training neural networks based solely on noisy data where the noise distribution is unknown. We showed that we can reliably learn the noise distribution from the noisy data without using any clean data which, in many cases, is not available. The algorithm can be easily combined with any existing deep learning implementation by simply adding another softmax output layer. Our results encourage collecting more data at a cheaper price, since mistaken data labels can be less harmful to performance. One possible future research direction would be to generalize our learning scheme to cases where both the features and the labels are noisy. We showed results on datasets with small and medium sized class-sets; another future research direction would be to evaluate the performance and efficiency of the proposed method on tasks with large class-sets."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745-6750, 1999.

P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, pp. 138-156, 2006.

E. Beigman and B. B. Klebanov. Learning with annotation noise. In ACL-IJCNLP, 2009.

A. Bekker and J. Goldberger. Training deep neural-networks based on unreliable labels. In IEEE Int.l Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2682-2686, 2016.

C. Brodley and M. Friedl. Identifying mislabeled training data. J. Artif. Intell. Res. (JAIR), 11:131-167, 1999.

B. Frénay and A. Kaban. A comprehensive introduction to label noise. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2014.

B. Frénay and M. Verleysen. Classification in the presence of label noise: a survey. IEEE Trans. on Neural Networks and Learning Systems, 25(5):845-869, 2014.

Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems (NIPS), 2005.

G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2014.

B. Jakramate and A. Kaban. Label-noise robust logistic regression and its applications. In Machine Learning and Knowledge Discovery in Databases, pp. 143-158, 2012.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.

J. Larsen, L. Nonboe, M. Hintz-Madsen, and K. L. Hansen. Design of robust neural network classifiers. In Int. Conf. on Acoustics, Speech and Signal Processing, pp. 1205-1208, 1998.

V. Mnih and G. Hinton. Learning to label aerial images from noisy data. In Int. Conf. on Machine Learning (ICML), 2012.

F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pp. 246-252, 2005.

N. Natarajan, I. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems (NIPS), 2013.

D. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial Intelligence Review, 2010.

M. Pechenizkiy, A. Tsymbal, S. Puuronen, and O. Pechenizkiy. Class noise and supervised learning in medical domains: The effect of feature extraction. In Computer-Based Medical Systems (CBMS), 2006.

S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.

S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080, 2014.

X. Zhu and X. Wu. Class noise vs. attribute noise: A quantitative study. Artificial Intelligence Review, 22(3):177-210, 2004."}]
HJStZKqel
[{"section_index": "0", "section_name": "LIFELONG PERCEPTUAL PROGRAMMING By\nEXAMPLE", "section_text": "Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlov\nMicrosoft Research"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A goal of artificial intelligence is to build a single large neural network model that can be trained in\na lifelong learning setting; i.e., on a sequence of diverse tasks over a long period of time, and gain\ncumulative knowledge about different domains as it is presented with new tasks. The hope is that such\nsystems will learn more accurately and from less data than existing systems, and that they will exhibit\nmore flexible intelligence. However, despite some work showing promise towards multitask learning\n(training on many tasks at once) and transfer learning (using source tasks to improve learning in a\nlater target task) (Caruana 1997} Luong et al} 2015} Parisotto et al.||2015}|Rusu et al. 2016), most\nsuccesses of neural networks today come from training a single network on a single task, indicating\n\nthat this goal is highly challenging to achieve.\nWe argue for two properties that such systems should have in addition to the ability to learn from <\nsequence of diverse tasks. First is the ability to learn from weak supervision. Gathering high-quality\nlabeled datasets is expensive, and this effort is multiplied if all tasks require strong labelling. In\nthis work, we focus on weak supervision in the form of pairs of input-output examples that come\nfrom executing simple programs with no labelling of intermediate states. Second is the ability tc\ndistill knowledge into subcomponents that can be shared across tasks. If we can learn models where\nthe knowledge about shared subcomponents is disentangled from task-specific knowledge, then the\nsharing of knowledge across tasks will likely be more effective. Further, by isolating shared subcom\nponents, we expect that we could develop systems that exhibit reverse transfer (i.e., performance or\nearlier tasks automatically improves by improving the shared components in later tasks).\nA key challenge in achieving these goals with neural models is the difficulty in interpreting weights\ninside a trained network. Most notably, with a purely neural model, subcomponents of knowledge\ngained after training on one task cannot be easily transferred to related tasks. Conversely, traditional\ncomputer programs naturally structure solutions to diverse problems in an interpretable, modular\nform allowing (1) re-use of subroutines in solutions to new tasks and (2) modification or errot\ncorrection by humans. Inspired by this fact, we develop end-to-end trainable models that structure\ntheir solutions as a library of functions, some of which are represented as source code, and some of\nwhich are neural networks.\nMethodologically, we start from recent work on programming by example (PBE) with differentiable\ninterpreters, which shows that it is possible to use gradient descent to induce source code operating\non basic data types (e.g. integers) from input-output examples {Gaunt et al] 2076} Riedel et al.|{2016\n. In this work we combine these differentiable interpreters with neural networl\n\nclassifiers in an end-to-end trainable system that learns programs that manipulate perceptual data\nWe introduce and develop solutions for the problem of Lifelong Perceptual Pro-\ngramming By Example (LPPBE). 
The problem is to induce a series of programs that require understanding perceptual data like images or text. LPPBE systems learn from weak supervision (input-output examples) and incrementally construct a shared library of components that grows and improves as more tasks are solved. Methodologically, we extend differentiable interpreters to operate on perceptual data and to share components across tasks. Empirically, we show that this leads to a lifelong learning system that transfers knowledge to new tasks more effectively than baselines, and the performance on earlier tasks continues to improve even as the system learns on new, different tasks.

In addition, we make our interpreter modular, which allows lifelong learning on a sequence of related tasks: rather than inducing one fresh program per task, the system is able to incrementally build a library of (neural) functions that are shared across task-specific programs. To encapsulate the challenges embodied in this problem formulation, we name the problem Lifelong Perceptual Programming By Example (LPPBE). Our extension of differentiable interpreters that allows perceptual data types, neural network function definitions, and lifelong learning is called NEURAL TERPRET (NTPT).

Empirically, we show that a NTPT-based model learns to perform a sequence of tasks based on images of digits and mathematical operators. In early tasks, the model learns the concepts of digits and mathematical operators from a variety of weak supervision, then in a later task it learns to compute the results of variable-length mathematical expressions. The approach is resilient to catastrophic forgetting (McCloskey & Cohen, 1989; Ratcliff, 1990); on the contrary, results show that performance continues to improve on earlier tasks even when only training on later tasks. In total, the result is a method that can gather knowledge from a variety of weak supervision, distill it into a cumulative, re-usable library, and use the library within induced algorithms to exhibit strong generalization.

We briefly review the TERPRET language for constructing differentiable interpreters. To address LPPBE, we develop NEURAL TERPRET, an extension to support lifelong learning, perceptual data types, and neural network classifiers. We also define our tasks.

[Figure 1: (NEURAL) TERPRET programs for counting symbols on a tape, with input-output examples. Both programs describe an interpreter with instructions to MOVE on the tape and READ the tape according to source code parametrized by instr. (left) A TERPRET program that counts '1's. (right) A NEURAL TERPRET program that additionally learns a classifier is_dinosaur.]"}, {"section_index": "2", "section_name": "2.1 TERPRET", "section_text": "TERPRET programs describe differentiable interpreters by defining the relationship between Inputs and Outputs via a set of inferrable Params that define an executable program and Vars that store intermediate results. TERPRET requires all of these variables to be finite integers. To learn using gradient descent, the model is made differentiable by a compilation step that lifts the relationships between integers specified by the TERPRET code to relationships between marginal distributions over integers in finite ranges. There are two key operations in this compilation process:

- Function application. The statement z.set_to(foo(x, y)) is translated into \mu_z^i = \sum_{j,k} I_{ijk} \mu_x^j \mu_y^k, where \mu_x represents the marginal distribution for the variable x and I is an indicator tensor I_{ijk} = 1[i = foo(j, k)]. This approach extends to all functions mapping any number of integer arguments to an integer output.

- Conditional statements. The statements if x == 0: z.set_to(a); elif x == 1: z.set_to(b) are translated to \mu_z = \mu_x^0 \mu_a + \mu_x^1 \mu_b. More complex statements follow a similar pattern, with details given in Gaunt et al. (2016).

This compilation process yields a TensorFlow (Abadi et al., 2016) computation graph containing many of these two operations, which can then be trained using standard methods.
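These two lifting operations are easy to express directly. Below is a minimal NumPy sketch, with our own function names, assuming foo always returns values inside the declared output range.

```python
# Sketch of the two TERPRET lifting operations over marginal distributions mu.
import numpy as np

def lift_apply(foo, mu_x, mu_y, out_size):
    """z.set_to(foo(x, y)): mu_z[i] = sum_{j,k} 1[i == foo(j, k)] * mu_x[j] * mu_y[k].
    Assumes foo(j, k) always lies in range(out_size)."""
    mu_z = np.zeros(out_size)
    for j, pj in enumerate(mu_x):
        for k, pk in enumerate(mu_y):
            mu_z[foo(j, k)] += pj * pk
    return mu_z

def lift_if(mu_x, mu_a, mu_b):
    """if x == 0: z.set_to(a); elif x == 1: z.set_to(b):
    mu_z = mu_x[0] * mu_a + mu_x[1] * mu_b."""
    return mu_x[0] * mu_a + mu_x[1] * mu_b
```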
"}, {"section_index": "3", "section_name": "2.2 NEURAL TERPRET", "section_text": "To handle perceptual data, we relax the restriction that all variables need to be finite integers. We introduce a new tensor type whose dimensions are fixed at declaration, and which is suitable to store perceptual data. Additionally, we introduce learnable functions that can process vector variables. A learnable function is declared using @Learn([d_1, ..., d_D], d_out, hid_sizes=[l_1, ..., l_L]), where the first component specifies the dimensions d_1, ..., d_D of the inputs (which can be finite integers or tensors) and the second the dimension of the output. NTPT compiles such functions into a fully-connected feed-forward neural network whose layout can be controlled by the hid_sizes component, which specifies the number of layers and neurons in each layer. The inputs of the function are simply concatenated. Vector output is generated by learning a mapping from the last hidden layer, and finite integer output is generated by a softmax layer producing a distribution over integers up to the declared bound. Learnable parameters for the generated network are shared across every use in the NTPT program, and as they naturally fit into the computation graph for the remaining TERPRET program, the whole system is trained end-to-end. A simple TERPRET program counting bits on a tape, and a related NTPT program that counts up images of a particular class on a tape, are displayed in Fig. 1.
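As a sketch of what such a compiled learnable function might look like (our own construction in PyTorch, not the actual NTPT compiler):

```python
# Sketch: compile an @Learn declaration into an MLP ending in a softmax over
# a finite integer range. Names and the exact layout are illustrative.
import torch.nn as nn

def compile_learnable(d_ins, d_out, hid_sizes):
    """d_ins: input dimensions (one-hot sizes for finite integers, flat sizes
    for tensors); d_out: size of the output integer range."""
    layers, width = [], sum(d_ins)
    for h in hid_sizes:
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers += [nn.Linear(width, d_out), nn.Softmax(dim=-1)]
    return nn.Sequential(*layers)

# e.g. a 28*28 image mapped to a 10-way integer distribution, as for net_0:
net_0 = compile_learnable([28 * 28], 10, hid_sizes=[256, 256])
```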
To demonstrate the benefits of our approach for combining neural networks with program-like architecture, we consider three toy scenarios consisting of several related tasks, depicted in Fig. 2.

[Figure 2: Overview of tasks in the (a) ADD2X2, (b) APPLY2X2 and (c) MATH scenarios. 'A' denotes the APPLY operator which replaces the ? tiles with the selected operators and executes the sum. We show two MATH examples of different length.]

ADD2X2 scenario: The first scenario in Fig. 2(a) uses a 2 x 2 grid of MNIST digits. We set 4 tasks based on this grid: compute the sum of the digits in the (1) top row, (2) left column, (3) bottom row, (4) right column. All tasks require classification of MNIST digits, but need different programs to compute the result. As training examples, we supply only a grid and the resulting sum. Thus, we never directly label an MNIST digit with its class.

APPLY2X2 scenario: The second scenario in Fig. 2(b) presents a 2 x 2 grid of handwritten arithmetic operators. Providing three auxiliary random integers d_1, d_2, d_3, we again set 4 tasks based on this grid, namely to evaluate the expression d_1 op_1 d_2 op_2 d_3, where (op_1, op_2) are the operators represented in the (1) top row, (2) left column, (3) bottom row, (4) right column. In comparison to the first scenario, the dataset of operators is relatively small and consistent, making the perceptual task of classifying operators considerably easier. However, the algorithmic part is more difficult, requiring non-linear operations on the supplied integers.

MATH scenario: The final task in Fig. 2(c) requires combination of the knowledge gained from the weakly labeled data in the first two scenarios to execute a handwritten arithmetic expression."}, {"section_index": "4", "section_name": "3. MODELS", "section_text": "We design one NTPT model for each of the three scenarios outlined above. Knowledge transfer is achieved by defining a library of 2 neural networks shared across all tasks and scenarios. Training on each task should produce a task-specific source code solution (from scratch) and improve the overall usefulness of the shared networks. Below we outline the details of the specific models for each scenario along with baseline models.

We refer to the 2 networks in the shared library as net_0 and net_1. Both networks have similar architectures: they take a 28 x 28 monochrome image as input and pass this sequentially through two fully connected layers, each with 256 neurons and ReLU activations. The last hidden vector is passed through a fully connected layer and a softmax to produce a 10-dimensional output (net_0) or 4-dimensional output (net_1) to feed to the differentiable interpreter. Note that the output sizes are chosen to match the number of classes of MNIST digits and arithmetic operators respectively.

If we create an interpreter model which is allowed to make calls to N untrained networks, and part of the interpreter uses a parameter net_choice = Param(N) to decide which network to apply, then the system effectively sees one large untrained network, which cannot usefully be split apart into the N components after training. To avoid this, we enforce that no more than one untrained network is introduced at a time (i.e. the first task has access to only net_0, and all other tasks have access to both nets). We find that this breaks the symmetry sufficiently to learn separate, useful classifiers."}, {"section_index": "5", "section_name": "3.2 ADD2x2 MODEL", "section_text": "For the ADD2X2 scenario we build a model capable of writing short straight-line algorithms with up to 4 instructions. The model consists of a read head containing net_0 and net_1 (with the exception of the very first task, which only has access to net_0, as discussed above) which are connected to a set of registers, each capable of holding integers in the range 0, ..., M, where M = 18. The head is initialized reading the top left cell of the 2 x 2 grid, and at each step in the program, one instruction can be executed from the following instruction set:

- NOOP: a trivial no-operation instruction.

- MOVE_NORTH, MOVE_EAST, MOVE_SOUTH, MOVE_WEST: translate the head (if possible) and return the result of applying the neural network chosen by net_choice to the image in the new cell.

- ADD(·, ·): accepts two register addresses and returns the sum of their contents.

where the parameter net_choice is to be learned and decides which of net_0 and net_1 to apply. To construct each line of code requires choosing an instruction and (in the case of ADD) addresses of arguments for that instruction. We follow Feser et al. (2016) and allow each line to store its result in a separate immutable register. Finally, we learn a parameter specifying which register to return after execution of the program. An example program in this model is shown in Fig. 3(a). Even this simple model permits a large space of syntactically distinct programs for the differentiable interpreter to search over.

[Figure 3: Example solutions for the tasks on the right columns of the (a) ADD2X2 and (b) APPLY2X2 scenarios. The read head is initialized READing the top left cell and any auxiliary Input Ints are loaded into memory. Instructions and arguments shown in black must be learned.]
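To fix intuition, here is a concrete (non-differentiable) sketch of this register machine; in the actual model the instruction, argument and net_choice values are inferred Params, and all names below are ours.

```python
# Concrete sketch of the ADD2X2 straight-line register machine (names ours).
M = 18  # registers hold integers in 0..M

def run_program(grid, program, nets, ret):
    """grid: 2x2 list of images; program: list of (instr, args) tuples;
    nets: classifiers mapping an image to an integer class; ret: register to return."""
    pos = (0, 0)                               # head starts at the top-left cell
    regs = []                                  # each line writes one fresh register
    deltas = {'NORTH': (-1, 0), 'EAST': (0, 1), 'SOUTH': (1, 0), 'WEST': (0, -1)}
    for instr, args in program:
        if instr == 'NOOP':
            regs.append(0)
        elif instr.startswith('MOVE_'):
            dr, dc = deltas[instr[5:]]
            pos = (min(max(pos[0] + dr, 0), 1), min(max(pos[1] + dc, 0), 1))
            regs.append(nets[args['net_choice']](grid[pos[0]][pos[1]]))  # READ
        elif instr == 'ADD':
            a, b = args['addrs']
            regs.append((regs[a] + regs[b]) % (M + 1))
    return regs[ret]
```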
"}, {"section_index": "6", "section_name": "3.3. APPLY2X2 MODEL", "section_text": "We adapt the ADD2X2 model to the APPLY2X2 scenario by initializing three immutable registers with the auxiliary integers supplied with each 2 x 2 operator grid [see Fig. 2(b)]. In addition, we swap the ADD(·, ·) instruction for APPLY(·, ·, ·). The action of APPLY(a, b, op) is to interpret the integer stored at op as an arithmetic operator and to compute a op b. All operations are performed modulo (M + 1) and division by zero returns M. In total, this model exposes a program space of ~10^12 syntactically distinct programs."}, {"section_index": "7", "section_name": "3.4 MATH MODEL", "section_text": "We design the final scenario to investigate the synthesis of more complex control flow than straight-line code. A natural solution to execute the expression on the tape is to build a loop with a body that alternates between moving the head and applying the operators [see Fig. 4(b)]. This loopy solution has the advantage that it generalizes to handle arbitrary length arithmetic expressions.

Fig. 4(a) shows the basic architecture of the interpreter used in this scenario. We provide a set of blocks, each containing the instruction MOVE or APPLY. A MOVE instruction increments the position of the head and loads the new symbol into a block-specific immutable register using either net_0 or net_1, as determined by a block-specific net_choice. After executing the instruction, the interpreter executes a GOTO_IF statement which checks whether the head is over the end of the tape; if not, it passes control to the block specified by goto_addr, otherwise control passes to a halt block which returns a chosen register value and exits the program. This model describes a large space of syntactically distinct programs.

[Figure 4: Overview of the MATH model. (a) The general form of a block in the model (an instruction, a GOTO_IF branch, and a halt block with a return register). Blue elements are learnable. (b) A loop-based solution to the task in the MATH scenario:
L0: MOVE; R0 = READ(net_0); GOTO_IF L1
L1: R1 = APPLY(R1, R0, R2); GOTO_IF L2
L2: MOVE; R2 = READ(net_1); GOTO_IF L0
halt: return R1]
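Concretely, and again only as a sketch under our own naming (including the assumed op-code assignment), the loop executed by such a solution behaves like:

```python
# Concrete sketch of the block-structured MATH interpreter (names ours).
def apply_op(a, b, op, M=18):
    """Interpret op as an arithmetic operator (op-code mapping assumed);
    all arithmetic is mod (M + 1), and division by zero returns M."""
    if op == 0:
        r = a + b
    elif op == 1:
        r = a - b
    elif op == 2:
        r = a * b
    else:
        r = a // b if b != 0 else M
    return r % (M + 1)

def run_math(tape, blocks, nets, ret):
    """blocks: dicts with keys 'instr', 'dst', 'net_choice' or 'args', 'goto_addr'."""
    pos, regs, addr = 0, [0, 0, 0, 0], 0
    while True:
        blk = blocks[addr]
        if blk['instr'] == 'MOVE':
            pos += 1
            if pos >= len(tape):               # head over the end of the tape: halt
                return regs[ret]
            regs[blk['dst']] = nets[blk['net_choice']](tape[pos])  # READ
        else:                                  # APPLY(a, b, op) on register contents
            a, b, op = (regs[i] for i in blk['args'])
            regs[blk['dst']] = apply_op(a, b, op)
        addr = blk['goto_addr']
```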
See text for detail:\nWe construct 3 different neural baselines derived from this column architecture (see Fig. [5)\nFor the MATH task, we build a purely neural baseline by replacing the task-specific part of the\nMTNN network with an LSTM. At each step, this network takes in the shared embeddings of the\ncurrent symbol, updates an LSTM hidden state and then proceeds to the next symbol. We make a\nclassification of the final answer using the last hidden states of the LSTM. We find that we achieve\nbest performance with a 3 layer LSTM with 1024 elements in each hidden state and dropout between\nlavers. In addition. we investigate a Neural GPU baseline based on|Kaiser & Sutskeverld20161\u00b0\ne Each of the images in the 2 x 2 grid is passed through an embedding network with 2 layers\nof 256 neurons (c.f. net_0/1) to produce a 10-dimensional embedding. The weights of\nthe embedding network are shared across all 4 images.\n\ne These 4 embeddings are concatenated into a 40-dimensional vector and for the APPLY2x2\n\nthe auxiliary integers are represented as one-hot vectors and concatenated with this 40-\ndimensional vector.\n\ne This is then passed through a network consisting of 3 hidden layers of 128 neurons to\nproduce a 19-dimensional output\n1. Indep.: Each task is handled by an independent column with no mechanism for transfer.\n\n2. Progressive Neural Network (PNN): We follow|Rusu et al.|(2016) and build lateral con-\nnections linking each task specific column to columns from tasks appearing earlier in the\nlearning lifetime. Weights in all columns except the active task\u2019s column are frozen during\na training update. Note that the number of layers in each column must be identical to allow\nlateral connections, meaning we cannot tune the architecture separately for each task.\n\n3. Multitask neural network (MTNN): We split the column into a shared perceptual part and\na task specific part. The perceptual part consists of net _0 and net_1 embedding networks.\nIn an ideal case the symmetry between these embedding networks will be broken and one\nwill become specialized to handle handwritten digits while the other will handle handwritten\noperators. In order to encourage this symmetry breaking we zero out one of the networks\n\nwhen training on the first task (cf. the symmetry breaking technique mentioned in Sec.[3.1)\n\nThe task-specific part consists of a neural network that maps the perceptual embeddings\nto a 19 dimensional output. Note that unlike PNNs, the precise architecture of the task\nspecific part of the MTNN can be tuned for each individual task. We consider two MTNN\narchitectures:\n\n(a) MTNN-1: All task-specific parts are 3 layer networks comparable to the PNN case.\n(b) MTNN- 2: We manually t tune e the number of layers for each task and find best perfor-\n\nne 7\nprobability\n\naccuracy\n\no oF\nuno\n\nro\no\u00b0\n\n\u00b0\nua\n\nADD2x2: top row\nADD2x2: left column\nADD2x2: bottom row\nADD2x2: right column\nAPPLY2x2 tasks\n\n0 128 256 384 512 0 128 256384 512\n\ntraining example (1000s) training example (1000s)\n\n(a) (b)\n\nADD2x2:left\n\nAPPLY2x2:left\n\naccuracy\n\naccuracy\n\nb\n\u00b0\n\n\u00b0\nin\n\nroo\noo\n\n\u00b0\nin\n\n\u00b0\n\u00b0\n\nTT\n\n128256 +\u00ab384~=\u00ab*OS 2\ntraining example (1000s)\n\n(c)\nFirst we create a data set in a regime which best demonstrates the LPPBE problem. 
The most\nconvincing demonstration of LPPBE requires a series of tasks for which there is insufficient data to\nlearn independent solutions to all tasks and instead, success requires transferring knowledge from\none task to the next. Empirically, we find that training on any individual ADD2xX2 task with only\n\n1k distinct 2 x 2 examples produces low accuracies of around 40 4\ntest set of 10k examples) for both the purely neural baselines and\n\nt 20% (measured on a held-out\nNTPT methods. Since none of\n\nour models can satisfactorily solve an ADD2X2 task independently in this regime, we work with\nthis limited data set and argue that any success on these tasks during a lifetime of learning can be\n\nattributed to successful knowledge transfer. In addition, we check\n\nthat in a data rich regime (e.g.\n\n>4k examples) all of the baseline models and NTPT can independently solve each task with >80%\naccuracy. This indicates that the models all have sufficient capacity to represent satisfactory solutions,\n\nand the challenge is to find these solutions during training."}, {"section_index": "9", "section_name": "5.1 LIFELONG LEARNING", "section_text": "Reverse transfer: Fig. [6{a) focuses on the performance of NTPT on the first task (ADD2X2:top)\nThe red bars indicate times where the the system was presented with an example from this task\nNote that even when we have stopped presenting examples, the performance on this task continues\nto increase as we train on later tasks - an example of reverse transfer. We verify that this is due tc\ncontinuous improvement of net _0 in later tasks by observing that the accuracy on the ADD2X2:top\ntask closely tracks measurements of the accuracy of net_0 directly on the digit classification task.\nAvoidance of catastrophic forgetting: Fig. [6{b) shows\nthe performance of the NTPT on the remaining ADD2x2\ntasks. Both Fig.[6{a) and (b) include results for the MTNN-\n2 baseline (the best baseline for the ADD2X2 tasks). Note\nthat whenever the dominant training task swaps from an\nADD2X2 task to an APPLY2X2 task the baseline\u2019s perfor.\nmance on ADD2xX2 tasks drops. This is because the shared\nperceptual network becomes corrupted by the change in\ntask - an example of catastrophic forgetting. To try to limit\nFigure 6: Lifelong learning with NTPT. (a) top: the sequential learning schedule for all 8 tasks,\nbottom: performance of NTPT (solid) and the MTNN-2 baseline (dashed) on the first ADD2X2 task.\n(b) performance on the remaining ADD2X2 tasks. (c) Performance of all the baselines on the *:left\ntasks.\nTo test knowledge transfer between tasks we train on batches of data drawn from a time-evolving prob-\nability distribution over all 8 tasks in the ADD2X2 and APPLY2X2 scenarios (see the top of Fig.|6[a))\nDuring training, we observe the following key properties of the knowledge transfer achieved by\nNTPT:\ntask indep PNN MTNN-1 MTNN-2. 
task               indep.   PNN    MTNN-1   MTNN-2   NTPT
ADD2X2: top         35%     35%     26%      2%      87%
ADD2X2: left        32%     36%     38%     47%      87%
ADD2X2: bottom      34%     33%     40%     56%      86%
ADD2X2: right       32%     35%     44%     60%      86%
APPLY2X2: top       38%     39%     40%     38%      98%
APPLY2X2: left      39%     51%     41%     39%     100%
APPLY2X2: bottom    39%     48%     41%     40%     100%
APPLY2X2: right     39%     51%     42%     37%     100%

Figure 7: Final accuracies on all 2 x 2 tasks for all models at the end of lifelong learning.

Final performance: Fig. 6(c) focuses on the ADD2X2:left and APPLY2X2:left tasks to illustrate the relative performance of the baselines described in Sec. 4. Note that although PNNs avoid catastrophic forgetting, there is no clear overall winner between the MTNN and PNN baselines. NTPT learns faster and to a higher accuracy than all baselines for all the tasks considered here. For clarity we only plot results for the *:left tasks: the other tasks show similar behavior, and the accuracies for all tasks at the end of the lifetime of learning are presented in Fig. 7."}, {"section_index": "10", "section_name": "5.2 GENERALIZATION", "section_text": "In the final experiment we take net_0/1 from the end of the NTPT 2 x 2 training and start training on the MATH scenario. For the NTPT model we train on arithmetic expressions containing only 2 digits. The loopy structure of the MATH model introduces many local optima into the optimization landscape and only 2/100 random restarts converge on a correct program. We detect convergence to the correct program by a rapid increase in the accuracy on a validation set (typically occurring after around 30k training examples). Once the correct program is found, continuing to train the model mainly leads to further improvement in the accuracy of net_0, which saturates at 97.5% on the digit classification task. The learned source code generalizes perfectly to longer expressions, and the performance on long expressions comes from the accuracy of the learned perceptual components.

To pick a strong baseline for the MATH problem, we first perform a preliminary experiment with two simplifications from the case above: (1) rather than expecting strong generalization from just 2-digit training examples, we train candidate baselines with supervision on examples up to 5 digits in length, and (2) we remove the perceptual component of the task, presenting the digits and operators as one-hot vectors rather than images. Fig. 8(a) shows the generalization performance of the LSTM and Neural GPU (512-filter) baselines in this simpler setting after training to convergence.⁴ Based on these results, we restrict attention to the LSTM baseline and return to the full task including the perceptual component. In the full MATH task, we initialize the embedding networks of each model using net_0/1 from the end of the NTPT 2 x 2 training. Fig. 8(b) shows generalization of the NTPT and LSTM models on expressions of up to 16 digits after training to convergence. We find that even though the LSTM shows surprisingly effective generalization when supplied supervision up to 5 digits, NTPT trained on only 2-digit expressions still offers better results.

[Figure 8: Generalization behavior on MATH expressions. Solid dots indicate expression lengths used in training. We show results on (a) a simpler non-perceptual MATH task (numbers in parentheses indicate parameter count in each model: neural GPU 43.8M, LSTM 21.1M, TerpreT 32) and (b) the MATH task including perception.]

Lifelong Machine Learning. We operate in the paradigm of Lifelong Machine Learning (LML) (Thrun, 1994; 1995; Thrun & O'Sullivan, 1996; Silver et al., 2013; Chen et al., 2015), where a learner is presented a sequence of different tasks and the aim is to retain and re-use knowledge from earlier tasks to more efficiently and effectively learn new tasks.
This is distinct from related paradigms: multitask learning (presentation of a finite set of tasks simultaneously rather than in sequence), transfer learning (transfer of knowledge from a source to a target domain without notion of knowledge retention (Pan & Yang, 2010)), and curriculum learning (training a single model for a single task of varying difficulty (Bengio et al., 2009)).

⁴Note that Price et al. (2016) find similarly poor generalization performance for a Neural GPU applied to the similar task of evaluating arithmetic expressions involving binary numbers.

The challenge for LML with neural networks is the problem of catastrophic forgetting: if the distribution of examples changes during training, then neural networks are prone to forget knowledge gathered from early examples. Solutions to this problem involve instantiating a knowledge repository (KR), either directly storing data from earlier tasks or storing (sub)networks trained on the earlier tasks with their weights frozen. This knowledge base allows either (1) rehearsal on historical examples (Robins, 1995), (2) rehearsal on virtual examples generated by the frozen networks (Silver & Mercer, 2002; Silver & Poirier, 2006), or (3) creation of new networks containing frozen subnetworks from the historical tasks (Rusu et al., 2016; Shultz & Rivest, 2001).

To frame our approach in these terms, our KR contains partially-trained neural network classifiers which we call from learned source code. Crucially, we never freeze the weights of the networks in the KR: all parts of the KR can be updated during the training of all tasks - this allows us to improve performance on earlier tasks by continuing training on later tasks (so-called reverse transfer). Reverse transfer has been demonstrated previously in systems which assume that each task can be solved by a model parametrized by an (uninterpretable) task-specific linear combination of shared basis weights (Ruvolo & Eaton, 2013). The representation of task-specific knowledge as source code, learning from weak supervision, and shared knowledge as deep neural networks distinguishes this work from the linear model used in Ruvolo & Eaton (2013).

Neural Networks Learning Algorithms. Recently, extensions of neural networks with primitives such as memory and discrete computation units have been studied to learn algorithms from input-output data (Graves et al., 2016; Grefenstette et al., 2015; Joulin & Mikolov, 2015; Kurach et al., 2015; Bunel et al., 2016; Andrychowicz & Kurach, 2016; Zaremba et al., 2016; Riedel et al., 2016; Feser et al., 2016).
Whereas many of these works use a neural network controller managing a differentiable computer architecture, we flip this relationship. In our approach, a differentiable interpreter that is expressible as source code acts as the controller and makes calls to neural network components.

The methods above, with the exception of Reed & de Freitas (2016) and Graves et al. (2016), operate on inputs of (arrays of) integers. However, Reed & de Freitas (2016) requires extremely strong supervision, where the learner is shown all intermediate steps to solving a problem; our learner only observes input-output examples. Reed & de Freitas (2016) also show the performance of their system in a multitask setting. In some cases, additional tasks harm performance of their model, and they freeze parts of their model when adding to their library of functions. Only Bunel et al. (2016), Riedel et al. (2016) and Gaunt et al. (2016) aim to consume and produce source code that can be provided by a human (e.g. as a sketch of a solution) or returned to a human (to potentially provide feedback).

7 DISCUSSION

We have presented NEURAL TERPRET, a framework for building end-to-end trainable models that structure their solution as a library of functions represented as source code or neural networks. Experimental results show that these models can successfully be trained in a lifelong learning context, and they are resistant to catastrophic forgetting; in fact, they show that even after instances of earlier tasks are no longer presented to the model, performance still continues to improve.

Learning neural network models within differentiable interpreters has several benefits. First, learning programs imposes a bias that favors learning models that exhibit strong generalization, as illustrated by many works on program-like neural networks. Second, the source code components are interpretable by humans, allowing incorporation of domain knowledge describing the shape of the problem through the source code structure. Third, source code components can be inspected, and the neural network components can be queried with specific instances to inspect whether the shared classifiers have learned the expected mappings. A final benefit is that the differentiable interpreter can be seen as focusing the supervision. If a component is un-needed for a given task, then the differentiable interpreter can choose not to use the component, which shuts off any gradients from flowing to the component. We speculate that this could be a reason for the models being resistant to catastrophic forgetting, as the model either chooses to use a classifier, or ignores it (which leaves the component unchanged).

It is known that differentiable interpreters are difficult to train (Kurach et al., 2015; Neelakantan et al., 2016; Gaunt et al., 2016), and being dependent on differentiable interpreters is the primary limitation of this work. However, if progress can be made on more robust training of differentiable interpreters (perhaps extending ideas in, e.g., Feser et al. (2016)), then we believe there to be great promise in using the models we have presented here to build large lifelong neural networks."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems.
arXiv preprint arXiv:1603.04467, 2016.

Rich Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.

John K. Feser, Marc Brockschmidt, Alexander L. Gaunt, and Daniel Tarlow. Neural functional programming. 2016. Submitted to ICLR 2017.

Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. TerpreT: A probabilistic programming language for program induction. CoRR, abs/1608.04428, 2016. URL http://arxiv.org/abs/1608.04428.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), pp. 1828-1836, 2015.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems (NIPS), pp. 190-198, 2015.

Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109-165, 1989.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of the 4th International Conference on Learning Representations, 2016.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.

Eric Price, Wojciech Zaremba, and Ilya Sutskever. Extensions and limitations of the neural GPU. 2016. Submitted to ICLR 2017.

Roger Ratcliff. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions. Psychological Review, 97(2):285, 1990.

Scott E. Reed and Nando de Freitas. Neural programmer-interpreters. 2016.

Sebastian Riedel, Matko Bosnjak, and Tim Rocktäschel. Programming with a differentiable Forth interpreter. CoRR, abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640.

Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995.

Thomas R Shultz and Francois Rivest. Knowledge-based cascade-correlation: Using knowledge to speed learning. Connection Science, 13(1):43-72, 2001.

Daniel L Silver and Robert E Mercer. The task rehearsal method of life-long learning: Overcoming impoverished data. In Conference of the Canadian Society for Computational Studies of Intelligence, pp. 90-101. Springer, 2002.

Daniel L Silver and Ryan Poirier. Machine life-long learning with csMTL networks. In AAAI, 2006.

Daniel L Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pp. 49-55, 2013.

Sebastian Thrun. A lifelong learning perspective for mobile robot control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1994.

Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8 (NIPS), pp. 640-646, 1995."}]
HkYhZDqxg
[{"section_index": "0", "section_name": "TREE-STRUCTURED DECODING\nRECURRENT NEURAL NETWORKS", "section_text": "David Alvarez-Melis & Tommi S. Jaakkola\nComputer Science and Artificial Intelligence Lab\nMIT\n{davidam, tommi}@csail.mit.edv\nWe propose a neural network architecture for generating tree-structured object:\nfrom encoded representations. The core of the method is a doubly recurrent neu.\nral network model comprised of separate width and depth recurrences that are\ncombined inside each cell (node) to generate an output. The topology of the tree\nis modeled explicitly together with the content. That is, in response to an encodec\nvector representation, co-evolving recurrences are used to realize the associatec\ntree and the labels for the nodes in the tree. We test this architecture in an encoder:\ndecoder framework, where we train a network to encode a sentence as a vector\nand then generate a tree structure from it. The experimental results show the ef.\nfectiveness of this architecture at recovering latent tree structure in sequences anc\nat mapping sentences to simple functional programs."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural networks have become extremely popular for modeling structured data. Key t\ntheir success is their ability to learn long-range temporal dependencies, their flexibility, and ease o\ncustomization. These architectures are naturally suited for modeling sequences since the underlyin;\nstate evolution resulting from successive operations follows an inherently linear order (Wi iams &\n\n{1995} [1997). Indeed, they have been successfully adapted t\n\nlanguage modeling (Zaremba et al.|/2015), machine translation (Sutskever et al. ) and conver\nsational agents (Vinyals & Le||2015), among other applications.\nAlthough sequences arise frequently in practice, other structures such as trees or graphs do no\nnaturally conform to a linear ordering. For example, natural language sentences or associated pars\u00a2\ntrees, programs, hierarchical structures in biology, or molecules are not inherently linear structures\nWhile sentences in natural language can be modeled as if they were linear sequences, the underlying\nprocess is compositional {1892). Models that construct sentences compositionally shoulc\nderive an advantage from adopting a more appropriate inductive bias.\nThe flexibility and success of recurrent neural networks in modeling and generating sequential data\nhas prompted efforts to adapt them to non-sequential data too. Recent work has focused on the\napplication of neural architectures to hierarchical structures, albeit in we ways. wach of this\nwork has assumed that either the full tree structure is on SOLE Kise ) or at\nIn the former scenario, the network aggregates the node Se in a manner that i is herent\nwith a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e.,\nsequentially deciding which pairs of nodes to join with an edge until a tree is formed.\nThe full problem of decoding with structure, i.e., generating a tree-structured object with node labels\nfrom a given vector representation, has remained largely unexplored until recently. Recent efforts tc\nadapt RNNs to this context have so far remained relatively close to their sequential counterparts. 
For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata, 2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al., 2016)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this work, we propose a novel architecture tailored specifically to tree-structured decoding. At the heart of our approach is a doubly-recurrent (breadth and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.
To summarize, the main contributions of this paper are as follows:
• We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
• We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly by adding nodes with special tokens).
• We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.
We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties, such as invariance to structural changes and coarse-to-fine generation (Section 4.3).
Recursive Neural Networks. Recursive neural networks (Socher & Lin, 2011) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al., 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.
Tree-structured encoders. The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber, 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs. They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. TreeLSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.
Tree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation (as opposed to simultaneously in our approach), yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.
A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding trigger the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.
Hierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In this problem, the task is to predict a parse tree over a given sentence. For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder.
Starting from the leaves (words), they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions. Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for a fully generative decoding."}, {"section_index": "3", "section_name": "3 DOUBLY RECURRENT NEURAL NETWORKS", "section_text": "Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg (2016) do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.
The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.
An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.
With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states, one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling¹, updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.
Formally, let T = (V, E, X) be a connected labeled tree, where V is the set of nodes, E the set of edges and X are the node labels². Let g^a and g^f be functions which apply one step of the two separate RNNs.
For a node i ∈ V with parent p(i) and previous sibling s(i), the ancestral and fraternal hidden states are updated via
h_i^a = g^a(h_{p(i)}^a, x_{p(i)})    (1)
h_i^f = g^f(h_{s(i)}^f, x_{s(i)})    (2)
where x_{s(i)}, x_{p(i)} are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:
h_i^(pred) = tanh(U^f h_i^f + U^a h_i^a)    (3)
where U^f ∈ R^{n×D_f} and U^a ∈ R^{n×D_a} are learnable parameters. This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node i by sampling from the distribution
o_i = softmax(W h_i^(pred))    (4)
In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs. After the node's output symbol x_i has been obtained by sampling from o_i, the cell passes h_i^a to all its children and h_i^f to the next sibling (if any), enabling them to apply Eqs. (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt."}, {"section_index": "4", "section_name": "3.1 TOPOLOGICAL PREDICTION", "section_text": "As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it. Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.
Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al., 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata, 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably: while in the sequence framework only one stopping token is needed, a tree with n nodes might need up to O(n) padding nodes to be added, which can have important effects on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.
Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node h_i^(pred) with a projection and sigmoid activation:
p_i^a = σ(u^a · h_i^(pred))    (5)
p_i^f = σ(u^f · h_i^(pred))    (6)
¹Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g. the grammatical order of the words in a parse tree with respect to their sentence representation).
²We assume throughout that these values are given as class indicators x_i ∈ {1, ..., N}.
[Figure 1 graphic omitted; only the caption is recoverable.]
Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node i with parent p and sibling s. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.
Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form
o_i = softmax(W h_i^(pred) + α_i v^a + φ_i v^f)    (7)
where α_i, φ_i ∈ {0, 1} are binary variables indicating the topological decisions and v^a, v^f are learnable offset parameters. During training, we use gold-truth values in (7), i.e. α_i = 1 if node i has children and φ_i = 1 if it has a succeeding sibling. During testing, these values are obtained from p_i^a, p_i^f by sampling or beam-search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1."}, {"section_index": "5", "section_name": "3.2 TRAINING DRNNS", "section_text": "We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler, 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural³ dependencies of the tree. We obtain error signal at the node level from the two types of prediction: label and topology. For the former, we compute the cross-entropy loss of o_i with respect to the true label of the node x_i. For the topological values p_i^a and p_i^f we compute binary cross-entropy loss with respect to the gold topological indicators α_i, φ_i ∈ {0, 1}. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.
Note that the way BPTS is computed implies an underlying decoupled loss function
L(x̂) = Σ_{i ∈ V} ( L^label(x̂_i, x_i) + L^topo(p_i^a, α_i) + L^topo(p_i^f, φ_i) )
The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.
³The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.
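As an illustration of how this decoupled objective can be re-weighted, here is a small sketch of the per-node loss; the weighting knobs lambda_label and lambda_topo are hypothetical additions for illustration, not values used in the paper's experiments.
```python
import numpy as np

def node_loss(o, x_true, p_a, alpha, p_f, phi,
              lambda_label=1.0, lambda_topo=1.0, eps=1e-12):
    """Cross-entropy on the label plus binary cross-entropy on both
    topological decisions for a single node (the decoupled loss above)."""
    label_loss = -np.log(o[x_true] + eps)
    topo_loss = -(alpha * np.log(p_a + eps) + (1 - alpha) * np.log(1 - p_a + eps)) \
                - (phi  * np.log(p_f + eps) + (1 - phi)  * np.log(1 - p_f + eps))
    return lambda_label * label_loss + lambda_topo * topo_loss
```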
[Figure 2 graphics omitted; panels show the trees predicted by models trained on N = 500 through N = 4000 examples, alongside the gold tree.]
Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset, for a test example with description "ROOT B W F J V".
As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node. Analogously, we obtain the probabilities p_i^a and p_i^f, compute their loss, and replace them with the ground truth variables α_i, φ_i for all downstream computations. Addressing this exposure bias by mixing ground truth labels with model predictions during training (Venkatraman et al., 2015) or by incremental hybrid losses (Ranzato et al., 2016) is left as an avenue for future work."}, {"section_index": "6", "section_name": "4.1 SYNTHETIC TREE RECOVERY", "section_text": "In our first set of experiments we evaluate the effectiveness of the proposed architecture to recover trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next-sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of |T| symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with a 80%, 10%, 10% split). Further details on the construction of this dataset are provided in the Appendix.
The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyper-parameters with cross-validation. Full training details are provided in the Appendix.
Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering nodes and edges present in the gold tree (a sketch of this metric is given below).
This metric penalizes both missing and superfluous components. As a baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.
Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-Scores of 75% and 71%, respectively, the latter considerably above the baseline.⁴ This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2. The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).
⁴Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.
[Figure 3 and Figure 4 graphics omitted; the plots show Macro-F1 against the number of training examples, and node/edge precision against tree size, depth and width.]
Figure 3: Left: F1-Score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions. Right: Node (first column) and edge (second) precision as a function of tree size.
Figure 4: Node and edge precision as a function of tree depth (left figure) and width (right).
Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al., 2005; Branavan et al., 2009).
The IFTTT dataset (Quirk et al., 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website⁵, paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5. The data is user-generated and extremely noisy, which makes the task significantly challenging.
[Figure 5 graphic omitted; it depicts a recipe of the form IF (TRIGGER) THEN (ACTION), with channels (Facebook, Dropbox), functions ("You are tagged in a photo"; "Add file from URL"), arguments (e.g. Dropbox Folder Path) and parameters (e.g. {{ImageSource}}, {{Caption}}).]
Figure 5: Example recipe from the IFTTT dataset.
The description (above) is a user-generated natural language explanation of the if-this-then-that program (below).
Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes). Right: examples for which at least 3+ humans agree with gold (758 recipes).
Method      Channel  +Func  F1   |  Method      Channel  +Func  F1
retrieval   36.8     25.4   49.0 |  retrieval   43.3     32.3   56.2
phrasal     27.8     16.4   39.9 |  phrasal     37.2     23.5   45.5
sync        26.7     15.4   37.6 |  sync        36.5     23.5   45.5
classifier  64.8     47.2   56.5 |  classifier  79.3     66.2   65.0
posclass    67.2     50.4   57.7 |  posclass    81.4     71.0   66.5
SEQ2SEQ     68.8     50.5   60.3 |  SEQ2SEQ     87.8     75.2   73.7
SEQ2TREE    69.6     51.4   60.4 |  SEQ2TREE    89.7     78.4   74.2
GRU-DRNN    70.1     51.2   62.7 |  GRU-DRNN    89.9     77.6   74.1
LSTM-DRNN   74.9     54.3   65.2 |  LSTM-DRNN   90.1     78.2   77.4
We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al., 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions. In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly)."}, {"section_index": "7", "section_name": "4.3 MACHINE TRANSLATION", "section_text": "In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly-optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details), along with dependency parses of the target (English) side.
We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments. A slight modification here is that we distinguish left and right children in the tree, using two symmetric width-modules g^f_L and g^f_R that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al., 2017).
For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.
We compare our methods against the various extraction and phrase-based machine translation baselines of Quirk et al. (2015) and the methods of Dong & Lapata (2016): SEQ2SEQ, a sequence-to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed and the other one with recipes for which at least three humans agreed with the gold AST. The results are shown in Table 1. In both subsets, DRNNs perform on par or above previous approaches, with LSTM-DRNN achieving significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure 5), we computed node accuracy on the arguments level. Our best performing model, LSTM-DRNN, achieves a Macro F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.
[Figure 6 graphic omitted; it shows the relative log-likelihood change (%) under perturbation for DRNN (Small/Large) and Seq2Seq (Small/Large).]
Figure 6: Likelihood change under target structural perturbation.
First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for the DRNN we use depth instead, so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since the DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.
In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences.
If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder would assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence in comparison to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have similar standard deviation (40 ± 20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures. The results in Figure 6 show that DRNNs exhibit significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts."}, {"section_index": "8", "section_name": "5 DISCUSSION AND FUTURE WORK", "section_text": "We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from the parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling). The topology of the tree is modeled explicitly and separately from the label prediction, with modules that given the state of a node predict whether it has children and siblings.
The experimental results show that the proposed method is able to predict reasonable tree structures from encoded vector representations. Despite the simple structure of the IFTTT trees, the results on that task suggest a promising direction of using DRNNs for generating programs or executable queries from natural language. On the other hand, the results on the toy machine translation task show that even when used to generate sequences, DRNNs exhibit desirable properties, such as invariance over structural modifications and the ability to perform coarse-to-fine decoding. In order to truly use this architecture for machine translation, the approach must be scaled by resorting to batch processing on GPUs. This is possible since forward and backward propagation are computed sequentially along tree traversal paths, so that inputs and hidden states of parents and siblings can be grouped into tensors and operated on in batch. We leave this as an avenue for future work.
[Table 2 contents partially recoverable: for two example French sources ("produit différentes réponses qui changent avec le temps selon nos expériences et nos relations"; "je ne sais jamais quoi dire dans ces cas là"), the table lists SEQ2SEQ translations under increasing length limits and DRNN translations under increasing depth limits, e.g. DRNN d=1: "answers" / "know"; d=2: "different answers change" / "but i do not know".]
Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.
"}, {"section_index": "9", "section_name": "ACKNOWLEDGEMENTS", "section_text": "DA-M acknowledges support from a CONACYT fellowship. The authors would like to thank the anonymous reviewers for their constructive comments.
Gottlob Frege. Über Sinn und Bedeutung. Zeitschrift für Philos. und Philos. Krit., (1):25-50, 1892.
Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. Int. Conf. Learn. Represent., pp. 1-13, 2014. URL http://arxiv.org/abs/1412.6980
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. In ICLR, pp. 1-15, 2016.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Proc. SSST-8, Eighth Work. Syntax. Semant. Struct. Stat. Transl., pp. 103-111, 2014. URL http://arxiv.org/pdf/1409.1259v2.pdf
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.
Rj Kate, Yw Wong, and Rj Mooney. Learning to transform natural to formal languages. In Proc. Natl. Conf. Artif. Intell., volume 20, pp. 1062-1068, 2005. ISBN 1-57735-236-x. URL http://www.aaai.org/Library/AAAI/2005/aaai05-168.php
R Socher and C Lin. Parsing natural scenes and natural language with recursive neural networks. In EMNLP, pp. 129-136, 2011. ISBN 9781450306195. doi: 10.1007/978-3-540-87479-9.
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proc. 53rd Annu. Meet. Assoc. Comput. Linguist. 7th Int. Jt. Conf. Nat. Lang. Process., pp. 1556-1566, 2015. ISBN 9781941643723. URL http://arxiv.org/abs/1503.00075
Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving Multi-step Prediction of Learned Time Series Models. Twenty-Ninth AAAI Conf. Artif. Intell., pp. 3024-3030, 2015.
Oriol Vinyals and Quoc V. Le. A Neural Conversational Model. arXiv, 37, 2015.
Xingxing Zhang, Liang Lu, and Mirella Lapata. Top-down Tree Long Short-Term Memory Networks. In NAACL-HLT 2016, pp. 310-320, 2016."}, {"section_index": "10", "section_name": "A VARIATIONS ON TOPOLOGY PREDICTION", "section_text": "Besides the topology prediction approach presented in Section 3.1, we experimented with two additional variations of the proposed doubly-recurrent neuron: (i) using tokens to trigger both depth and width termination (i.e. implicit topology prediction) and (ii) using tokens for the width-stopping decision, but predicting depth termination explicitly (single topology prediction). Recall that in the model proposed in Section 3.1 both decisions are explicit (double topology prediction). The neurons in each of these alternative formulations are depicted in Figure 7. In order to train these two alternative models, we add special stopping tokens to the vocabulary, and we pad the training trees with additional nodes labeled with this token. Besides requiring larger trees and resulting in slower training, we empirically observed alternatives (i) and (ii) to result in worse performance.
We hypothesize that this has to do with the fact that when using token-based stopping, topological and label prediction decisions are confounded, which results in less efficient learning.
[Figure 7 graphic omitted; only the caption below is recoverable.]
"}, {"section_index": "11", "section_name": "B.1 BACKPROPAGATION WITH DRNN'S", "section_text": "Figure 7: A single unit in each of the three alternative versions of the doubly-recurrent neural network, for node i with parent p and sibling s. Left: No explicit topology prediction, Middle: single (ancestral) topology prediction, Right: double (ancestral and fraternal) topology prediction. The top (left) incoming arrows represent the input and state received from the parent node (previous node, respectively).
During training, we do the forward pass over the trees in breadth-first preorder, feeding into every node an ancestral and a fraternal state. For computational efficiency, before passing on the ancestral state to the offspring, we update it through the RNN using the current node's label, so as to avoid repeating this step for every child node. After the forward pass is complete, we compute label (cross-entropy) and topological (binary cross-entropy) loss for every node. In the backward pass, we compute in this order:
1. Gradient of the current node's label prediction loss with respect to the softmax layer parameters W, v^a, v^f: ∇L(x̂_i, x_i).
2. Gradients of the topological prediction variable losses with respect to the sigmoid layer parameters: ∇L(p_i^a, α_i) and ∇L(p_i^f, φ_i).
3. Gradient of the predictive state layer parameters with respect to h_i^(pred).
4. Gradient of the predicted ancestral and fraternal hidden states with respect to the parameters of g^f and g^a.
The gradients of the input ancestral and fraternal hidden states are then passed on to the previous sibling and parent. When nodes have more than one child, we combine gradients from multiple children by averaging them. This procedure is repeated until the root node is reached, after which a single (ancestral state) gradient is passed to the encoder."}, {"section_index": "12", "section_name": "B.2 MODEL SPECIFICATION AND TRAINING PARAMETERS", "section_text": "The best parameters for all tasks are chosen by performance on the validation sets. We perform early stopping based on the validation loss. For the IFTTT task, we initialize word embeddings with pretrained Glove vectors (Pennington et al., 2014). For both tasks we clip gradients when the absolute value of any element exceeds 5. We regularize with a small penalty ρ on the l2 norm of the parameters. We train all methods with ADAM (Kingma & Ba, 2014), with the initial learning rate chosen by cross-validation. The parameter configurations that yielded the best results and were used for the final models are shown in Table 3. Details about the four models used for the machine translation task are shown in Table 4.
Table 3: Hyperparameter choice for DRNNs in the synthetic and IFTTT tasks
Task       Encoder  Dim  Batch  Learning Rate  Regularization ρ
synthetic  LSTM     50   20     0.05           1x10^-5
IFTTT      GRU      150  35     0.06           1x10^-4
IFTTT      LSTM     150  35     0.05           5x10^-4
Table 4: Models used in the machine translation task.
Model            Encoder  Decoder                 Dim  RNN Layers  Batch
SEQ2SEQ (Small)  LSTM     LSTM                    150  1           64
SEQ2SEQ (Large)  LSTM     LSTM                    300  3           64
DRNN (Small)     LSTM     DRNN-GRU (Left-Right)   150  1           32
DRNN (Large)     LSTM     DRNN-GRU (Left-Right)   300  1           32
We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent p(i) and the last sibling s(i) generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet:
P(w_i | T) = P(w_i | w_{p(i)}, w_{s(i)}) ~ Multi(θ_{w_{p(i)}, w_{s(i)}})
where the θ_{w_{p(i)}, w_{s(i)}} are class probabilities drawn from a Dirichlet prior with parameter α_w. On the other hand, we denote by b_i^a the binary variable indicating whether node i has descendants, and by b_i^f that indicating whether it has an ensuing sibling. We model these variables as depending only on the label of the current node and its position in the tree:
P(b_i^a | T) = P(b_i^a | w_i, D_i) = Bernoulli(p_{w_i}^a · g^a(D_i))
P(b_i^f | T) = P(b_i^f | w_i, W_i) = Bernoulli(p_{w_i}^f · g^f(W_i))
where D_i is the depth of node i and W_i its width, defined as its position among the children of its parent p(i). Intuitively, we want to make P(b^a = 1 | T) decrease as we go deeper and further along the branches of the tree, so as to ensure termination. Thus, we model g^a and g^f as decreasing functions with geometric decay, namely g^a(D) = (γ_a)^D and g^f(W) = (γ_f)^W, with γ_a, γ_f ∈ (0, 1). For the label-conditioned branching probabilities P(b^a | w_i) and P(b^f | w_i), we use Bernoulli distributions with probabilities drawn from beta priors with parameters (α^a, β^a) and (α^f, β^f), respectively.
In summary, we use the following generative procedure to grow the trees:
1. For each w ∈ V, draw p_w^a ~ Beta(α^a, β^a) and p_w^f ~ Beta(α^f, β^f)
2. For each pair (w_i, w_j), draw θ_{w_i, w_j} ~ Dir(α_w)
3. While there is an unlabeled non-terminal node i, do:
   • Sample a label for i from w* ~ P(w | w_{p(i)}, w_{s(i)}) = Multi(θ_{w_{p(i)}, w_{s(i)}})
   • Draw b^a ~ P(b^a | w*, D) = Bernoulli(γ_a^D · p_{w*}^a), where D is the current depth. If b^a = 1, generate a node k, set p(k) = i, and add it to the queue.
   • Draw b^f ~ P(b^f | w*, W) = Bernoulli(γ_f^W · p_{w*}^f), where W is the current width. If b^f = 1, generate a node k, set s(k) = i, and add it to the queue.
Note that this generative process does create a dependence between the topology and content of the trees (since the variables b^a and b^f depend on the content of the tree via their dependence on the label of their corresponding node). However, the actual process by which labels and topological decisions are generated relies on separate mechanisms. This is a natural assumption which is reasonable to expect in practice.
The choice of prior parameters is done drawing inspiration from natural language parse trees. We want nodes to have low but diverse probabilities of generating children, so we choose a slow-decaying distribution with most mass allocated in values close to 0. For this, we use (α^a, β^a) = (0.25, 1). For sibling generation, we use (α^f, β^f) = (7, 2), which yields a distribution concentrated in values close to 1, so that nodes have on average a high and similar probability of producing siblings. Since we seek trees that are wider than they are deep, we use decay parameters γ_a = 0.6, γ_f = 0.9. Finally, we use α_w = 10 · 1 for the parent-sibling probability prior, favoring non-uniform interactions.
Using this configuration, we generate 5000 sentence-tree pairs, which we split into training (4000 examples), validation (500) and test (500) sets. The characteristics of the trees in the dataset are summarized in Table 5, which follows the sampling sketch below.
Table 5: Synthetic tree dataset statistics. Tree size is measured in number of nodes, depth is the largest path from the root node to a leaf, and width is the maximum number of children for any node in the tree. The values reported correspond to means with one standard deviation in parentheses.
Fold   Examples  Size         Depth        Width
train  4000      3.94 (3.38)  1.42 (0.66)  2.89 (1.71)
dev    500       4.13 (3.21)  1.46 (0.67)  2.91 (1.76)
test   500       3.64 (3.21)  1.32 (0.61)  2.80 (1.71)
The IFTTT dataset comes with a script to generate the data by crawling and parsing the recipes. Unfortunately, by the time we ran the script many recipes had been removed or changed. We therefore resorted to the original dataset used by Quirk et al. (2015). We converted these recipes into our tree format, assigning a node to each element in the first three levels (channels, functions and arguments, see Figure 5). For the parameters level, many recipes have sentences instead of single tokens, so we broke these up, creating one node per word. The last two layers are therefore the most topologically diverse, whereas the structure of the first two layers is constant (all trees have channels and functions). A very small fraction (< 1%) of trees that could not be parsed into our format was excluded from the dataset.
Table 6 shows various statistics about the topological characteristics of the recipes in the IFTTT dataset. The middle columns show the percentage of trees that contain nonempty arguments and parameters in the trigger (IF) and action (THEN) branches. Almost all recipes have nonempty action arguments and parameters (and thus depth 4, excluding the root), and a lower percentage, but still a majority, has arguments and parameters on the trigger side too. The last two columns show tree statistics pertaining to the complexity of trees after conversion to our format. The distribution of tree sizes is mostly concentrated between 4 and 30 nodes, with a slow-decaying tail of examples above this range (see Figure 8).
Table 6: IFTTT dataset statistics. The middle columns show the percentage of trees that contain nonempty arguments and parameters in trigger (IF) and action (THEN) branches. The last columns show average (with standard deviation) tree size and depth.
Fold   Examples  Args: Trigger (%)  Args: Action (%)  Params: Trigger (%)  Params: Action (%)  Size (# nodes)  Depth
train  67,444    69.10              98.46             65.47                96.77                16.93 (31.71)   3.99 (.13)
dev    4,038     69.44              98.46             66.42                96.31                16.55 (8.75)    3.99 (.11)
test   3,725     68.38              98.66             65.64                97.50                16.43 (8.18)    3.99 (.12)
[Figure 8 graphic omitted; it is a histogram of tree sizes for the train/dev/test folds.]
Figure 8: Tree size distribution in the IFTTT dataset.
Regarding the content of the trees, the labels of the nodes in the first two levels (channels and functions) come from somewhat reduced vocabularies: 111 and 434 unique symbols for the trigger branch, respectively, and 157 and 85 for the action branch. The lower layers of the tree have a much more diverse vocabulary, with about 60K unique tokens in total. On the source side, the vocabulary over the sentence descriptions is large too, with about 30K unique tokens.
The average sentence size is 6.07 tokens, with 80% of the sentences having at most 12 tokens.
For the perturbation experiments, we randomly selected 50 sentences from among those in the test set that could be easily restructured without significantly altering their meaning. The types of alteration we perform are: subordinate clause swapping, alternative construction substitution, and passive/active voice change. In doing this, we try to keep the number of added/deleted words to a minimum, to minimize vocabulary-induced likelihood variations. When inserting new words, we verify that they are contained in the original vocabulary of 20K words. In Table 7 we show a few examples of the source, original target and perturbed target sentences.
Starting from a preprocessed⁶ 2% sub-selection of the English-French section of the WMT14 dataset, we further prune down the data by keeping only sentences of length between 5 and 20 words, and for which every word is within the 20K most frequent. The reason for this is to simplify the task by keeping only common words and avoiding out-of-vocabulary tokens. After this filtering, we are left with 53,607, 918 and 371 sentences for the train, validation and test sets. After tokenizing, we obtain dependency parses for the target (English) sentences using the Stanford CoreNLP toolkit (Manning et al., 2014).
Table 7: Example structural perturbations for likelihood robustness experiments.
source: "après un accord de paix signé en 1992 elle est devenue un parti d'opposition."
target: "after a 1992 peace deal it became an opposition party."
perturbation: "it became an opposition party after a 1992 peace deal."
source: "cela représente environ 9 milliards de grains de maïs."
target: "that's about 9 billion individual kernels of corn."
perturbation: "this amounts to about 9 billion kernels of corn."
source: "l'exercice de fonctions publiques est une question de service public."
target: "public office is about public service."
perturbation: "the exercise of public functions is a matter of public service."
source: "nous avons ainsi effectué depuis la fin de l'hiver dernier 64 interventions."
target: "hence we have carried out 64 operations since last winter."
perturbation: "we have therefore carried out 64 operations since last winter."
source: "on estime qu'un enfant sur 2000 nés chaque année n'est ni un garçon ni une fille."
target: "an estimated one in 2000 children born each year is neither boy nor girl."
perturbation: "it is estimated that one in every 2000 children born every year is neither a boy nor a girl."
[Figure 9 graphics omitted; panels (a)-(d) show the predicted trees for encoder inputs "ROOT P R C", "ROOT Z T Y Q", "ROOT K T V" and "ROOT Q F V R G D A", for models trained on subsets of size N = 500 through N = 4000, alongside the gold trees.]
Figure 9: Selected trees generated by the DRNN decoder from vector-encoded descriptions for test examples of the synthetic tree dataset.
Trees in the same row correspond to predictions by models trained on randomly sampled subsets of size N of the training split. We present cases for which the prediction is accurate (a, c) and cases for which it is not (b, d). Note how in (d) the model predicts many of the labels correctly, but confuses some of the dependencies (edges) in the tree."}]
HkzuKpLgg
[{"section_index": "0", "section_name": "EFFICIENT COMMUNICATIONS IN TRAINING\nLARGE SCALE NEURAL NETWORKS", "section_text": "Linnan Wang\nSchool of Computer Science\nGeorgia Institute of Technology\nSchool of Computational Science & Engineering\nGeorgia Institute of Technology\nWe consider the problem of how to reduce the cost of communication that is\nrequired for the parallel training of a neural network. The state-of-the-art method.\nBulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many\ncollective communication operations, like broadcasts of parameters or reductions\nfor partial gradient aggregations, which for large messages quickly dominates\noverall execution time and limits parallel scalability. To address this problem, we\ndevelop a new technique for collective operations, referred to as Linear Pipelining\n(LP). It is tuned to the message sizes that arise in BSP-SGD, and works effectively\non multi-GPU systems. Theoretically, the cost of LP is invariant to P, where P is\nthe number of GPUs, while the cost of the more conventional Minimum Spanning\nTree (MST) scales like O(log P). LP also demonstrates up to 2x higher bandwidth\nthan Bidirectional Exchange (BE) techniques that are widely adopted by current\nMPI implementations. We apply these collectives to BSP-SGD, showing that the\nproposed implementations reduce communication bottlenecks in practice while\npreserving the attractive convergence properties of BSP-SGD."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Scaling up neural networks with respect to parameter sizes, training sets, or both has drastically\nimproved the state-of-the-art performance in several domains ranging from scene understanding\nspeech recognition, even to playing Go against professional players. Although training a larg\nnetwork saturated with nonlinearities is extremely time-consuming, the benefits brought forth by\nlarge-scale models has sparked a surge of interest in parallelizing training on multi-GPUs. The\nparallelization of SGD demands synchronizations to exchange gradients and parameters per iteratior\nand this introduces significant communication overhead. Previous studies have focused on trading the\nSGD convergence rate for fast gradient updates, such as stale or asynchronous SGD, 1-bit compressec\ngradient, etc. However, these methods are rarely adopted by Deep Learning frameworks as they\ndepend on the balance between the enhanced iteration throughput and the decelerated convergence\nrate. Since BSP retains the convergence properties of SGD. its optimization should be of interest.\nThe gradient aggregations and parameter exchanges in BSP SGD are typical operations of commu-\nnication collectives (Chan et al.|{2007). Messages in the large-scale neural networks training are\ndense, long, and fixed-length, while the performance of collective algorithms is drastically sensitive\nto these attributes. Besides, the processing speed is several orders of magnitude faster than the\nnetwork unidirectional transmission rate. These prioritize the utilization of network bandwidth in\nthe collective design. However, we have seen sub-optimal collective algorithms, e.g. MST and BE,\nwidely adopted by the deep learning community (Agarwal et al (Jia et al.| (Duchi et al.|\nq'\n\n(2011). 
MST is only suitable for the latency dominant case such as frequent short message exchanges,\nwhile the bandwidth term of BE can be further improved (Thakur et al.|[2005).\nWei Wu & George Bosilca\nBig Data Research Center\nUniv. of Electr. Sci. & Tech. of Chin\nzlxu@uestc.edu.cn"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Illustrations of various methods to accelerate the training. Black blocks stands for computa\ntions, and white blocks stands for communications. CUDNN reduces the computation cost, while we\nreduce the communication cost.\nIn this paper, we introduce new Linear Pipeline based collectives for multiGPU training. Th\ncollectives demonstrate O(log(P)) speedups over MST collectives and up to 2x speedups over BI\nbased ones; the bounds only hold in training large neural networks. In particular, the theoretica\nanalysis and the implementation yield an interesting insight that the cost of our design is invarian\nto GPU numbers, i.e., the cost of collective operations on 2 GPUs is similar to 20 GPUs. The desig\nexplores message granularity to maximize simultaneous bidirectional data exchanges. In specific\nit divides a message into fine-grained blocks as the basic communication element. A GPU send\na block (via DMA 1) while receiving (via DMA 2) a new block from a neighbor. The copies ar\nasynchronously launched on two GPU streams, and numerical operations further overlap data copie\nAs a result, our method yields a highly efficient pipeline over which messages for neural networ\ntraining may be exchanged.\nThe proposed collective design achieves 2.3x to 360.55x speedups over Open MPI alternatives on\n6 GPUs. In training GoogLeNet, we set up the same BSP SGD implementation with different\n\nunderlying collectives. Our design demonstrates up to 1.7x convergence speedup over MST based\nCaffe.\nThe first group of approaches relaxes synchronous models of SGD to increase the iteration throughpu\n(Dean et al. (2012), Zinkevich et al.|(2010)). In this case, the relaxed SGD enables computation:\non a GPU to partially overlap with communications on others as demonstrated in Fig{Ic]and Fig[Ic\nproposed a lock free Asynchronous SGD (ASGD) that entirely gets rid of the\nsynchronization requirement by allowing free concurrent parameter updates. But the relaxation onl\nworks well on sparse learning problems. In response, |Ho et al.| (2013) introduced the concept o\nstaleness by bounding the fastest and the slowest machine within a few iterations of each other t\nensure correctness. These relaxations claim to be effective as the enhanced iteration throughpu\noffsets the disadvantages or F degraded convergence rate. However, recent advances in deep learnin;\n(2016)) have reestablished the advantages of BSP over relaxed ones in trainin;\nneural networks. This reiterates the importance of studying BSP SGD.\nThe second group of approaches tries to reduce the overall communication volume. |Seide et al\nquantized gradients from 32 bits to 1 bit to reduce the message length, but the lost gradient\ninformation decelerates the convergence rate. Another approach is to accelerate the convergence with\na large batch. (2012) shows the convergence rate of mini-batch SGD is O(1//Tb+1/T)\nwith b being the batch size. This result indicates a large batch needs fewer iterations to find a solution\nand thereby fewer overall synchronizations. However, unwieldy increasing the batch size is also\nunfavorable under limited computing resources demonstrated by [Wang et al.|(2016b). 
Figure 2: The data flow of broadcast, reduce and allreduce on 3 GPUs.

The third group of approaches conducts system optimizations to minimize the communication cost (Wang et al. (2016a)). Agarwal & Duchi (2011) and Agarwal et al. (2014) presented partial gradient aggregations guided by an MST that takes log(P) steps to fully synchronize the model. Deep learning frameworks such as Caffe (Jia et al. (2014)) also adopt this approach. Unfortunately, MST is only suitable for latency dominant scenarios (i.e. highly frequent short messages). Although collective algorithms have been thoroughly discussed in the HPC community (Almasi et al. (2005), Gabriel et al. (2004), Shipman et al. (2006)), few have studied their performance for deep learning. The performance of collectives varies significantly with different message lengths and network topologies, while messages in deep network training are dense, long and fixed-length. Therefore, it is imperative to address these peculiarities in the collectives. Worringen (2003) proposed a pipelined collective model in a shared memory environment for CPU data, but communications of different MPI processes share the same CPU memory bus within the same CPU socket. This causes bandwidth competition among processes, and thereby poor collective performance in the shared memory environment for CPU data. In contrast, PCI-E is bi-directional, and the latest GPUs also feature two independent DMA engines for simultaneous, independent in/out communications. These hardware updates pave the way for LP based GPU communications.

This section presents a new LP based multiGPU collective design, followed by a concrete proof of its performance in training neural networks. The general idea of LP is as follows: a) we dissect a long message into fine-grained blocks; b) a GPU receives a block from the prior GPU via DMA1 while sending a block to the next one via DMA2. Please note each block exchange utilizes an independent physical link, and the entire network is fully utilized once the pipeline is filled.
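The following is a minimal sketch (our own, under the chain topology described above) that simulates the block schedule of an LP broadcast; it only tracks which block each device holds at each step, which is enough to see how the pipe fills and drains.

def lp_broadcast_schedule(num_gpus, num_blocks):
    """Simulate the block schedule of a Linear Pipeline broadcast.

    GPU 0 is the source; data flows along the chain 0 -> 1 -> ... -> p-1.
    At every step each GPU forwards one block to its right neighbor while
    (on the other DMA engine) receiving a new block from its left neighbor.
    The pipe fills in p - 1 steps and then drains one block per step, so
    the total grows like p + n/b block steps rather than (n/b) * log p,
    matching the (p - 1 + n/b) pipe length in the cost analysis below
    (up to a +/-1 counting convention).
    """
    received = {0: set(range(num_blocks))}      # the source holds all blocks
    for g in range(1, num_gpus):
        received[g] = set()
    step = 0
    while any(len(received[g]) < num_blocks for g in range(num_gpus)):
        # Each GPU forwards its oldest block that the neighbor still misses;
        # all sends in a step are concurrent (independent links and DMAs).
        moves = []
        for g in range(num_gpus - 1):
            missing = received[g] - received[g + 1]
            if missing:
                moves.append((g + 1, min(missing)))
        for dst, blk in moves:
            received[dst].add(blk)
        step += 1
    return step

# 3 GPUs, 4 blocks: (p - 1) + (B - 1) = 5 block steps.
print(lp_broadcast_schedule(3, 4))  # -> 5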
Broadcast tackles the synchronization of parameters among multiple GPUs. It copies the source vector to every GPU. Fig 2a illustrates the data flow of the broadcast collective on 3 GPUs. GPU0 is the source, and the rest are destinations. Broadcast starts with filling the pipe by copying block a on GPU0 to GPU1 at step 1. Let's focus on GPU1: at each step, GPU1 receives a block from GPU0 via DMA1, while GPU1 is also sending a block to GPU2 via DMA2. The data exchange in either direction utilizes an independent link and DMA engine to achieve the maximal unidirectional rate. Hence, the bandwidth is fully exploited.

Reduce aggregates the partial gradients to reconstruct the global one. It combines the elements provided in the vector of each GPU, and returns the combined value in the receive vector of a specific GPU. It supports basic arithmetic operations such as summations and multiplications. Fig 2b illustrates the data flow of the reduce collective. GPU2 is the root that aggregates the vectors across all GPUs. Reduce starts with filling the pipe by writing block a0 to a buffer on GPU1. Then, GPU1 reduces the received block a0 with a1 to yield a' (within the rectangle of Fig 2b). Please note the computation is much faster than the communication, so we assume no latency for it; in practice, computations are further overlapped with communications. In the next step, GPU1 retrieves b0 from GPU0 to reduce to b' via DMA 1, while GPU1 is also sending a' to GPU2 to reduce to a'' via DMA 2. b'', c'', d'' are reduced at steps 3, 4, 5 in a similar fashion.

AllReduce enables us to collect partial gradients and broadcast the latest parameters with only one synchronization point per SGD iteration. It combines vectors from all GPUs and distributes the result back to them. Mathematically, it is equivalent to a reduce followed by a broadcast. However, allreduce is more efficient than two separate calls as it only needs to fill the pipeline once. For example, it takes 9 timesteps to allreduce 4 message blocks, while broadcast + reduce costs 10. Fig 2c illustrates the data flow of the allreduce collective. It starts with reducing a'', after which a'' is broadcast to GPU1 and GPU2 at steps 5 and 6 respectively. Please note d0 utilizes the outbound DMA at step 4, therefore a'' has to wait until step 5. b'', c'', d'' are processed in a similar fashion.

Table 1: The estimated costs of the 3 collective communications.
broadcast:  BE: (log p + p - 1)α + 2((p-1)/p)nβ;  MST: log p (α + nβ);  LP: (p - 1 + n/b)α + (b(p-1) + n)β
reduce:     BE: 2 log p α + 2((p-1)/p)nβ + ((p-1)/p)nγ;  MST: log p (α + nβ + nγ);  LP: (p - 1 + n/b)α + (b(p-1) + n)(β + γ)
allreduce:  BE: 2 log p α + 2((p-1)/p)nβ + ((p-1)/p)nγ;  MST: log p (2α + 2nβ + nγ);  LP: 2(p - 1 + n/b)α + (b(p-1) + n)(2β + γ)

Our collective is also specifically designed to accommodate GPU features such as asynchronous kernel launches and multi-stream processing. The rectangle of Fig 2a demonstrates that the data transfers are asynchronously launched on two separate streams: the copies happening in the red steps are scheduled on one stream, while copies in the black steps are scheduled on another. This hides the overhead of GPU kernel launches, further improving the pipeline. We illustrate the data flow of the collectives on 3 GPUs; if there are k GPUs, GPU n, 0 < n < k - 1, duplicates the same communication pattern as GPU 1.
"}, {"section_index": "3", "section_name": "3.1 ARCHITECTURE ANALYSIS", "section_text": "LP is the optimal collective algorithm to fully exploit the network bandwidth of a multiGPU system. Even though PCI-E supports full-duplex communication between any two endpoints, each PCI-E endpoint device only has one input and one output port. This results in bandwidth competition if a GPU is receiving from multiple GPUs. Similarly, each PCI-E switch only contains one input and one output port for inter-switch communication, and inter-switch communications in the same direction also compete for the PCI-E bus. It is known that any delay in data movement between two GPUs interrupts the pipelining in the collectives. In such an architecture, the communication from parents to children in MST based collective algorithms competes for the same PCI-E bus, thereby breaking pipelining. The data exchange of BE also suffers from inter-switch communication congestion in one direction. In contrast, LP connects all GPUs into a chain, and data always flow in one direction. Hence, data movements between two GPUs exclusively occupy the entire PCI-E bus, ensuring uninterrupted pipelining.

We model the cost of transmitting a message of n bytes as

T = α + βn + γn

where α is the latency or startup time of sending a message, β and γ are the transmission rate and reduce rate measured in time per byte, and n is the message size in bytes. We also denote p as the node count, and b as the block size (in bytes) in the pipeline.

Proposition 1. If the network latency α → 0, Linear Pipeline collectives provide an O(log p) speedup over Minimal Spanning Tree collectives and up to a 2 times speedup over Bidirectional Exchange collectives as the message size n → ∞.

Proof. First, we derive the costs of the three Linear Pipeline collectives. According to Fig 2, the length of the pipeline is p - 1 + n/b blocks, assuming each block to be b bytes. A block exchange takes α + βb + γb (with reduce) or α + βb (without reduce). Consequently, broadcast essentially costs (α + βb)(p - 1 + n/b) = (p - 1 + n/b)α + (b(p-1) + n)β, and reduce costs (α + βb + γb)(p - 1 + n/b) = (p - 1 + n/b)α + (b(p-1) + n)(β + γ). allreduce is approximately equivalent to a reduce followed by a broadcast; therefore, allreduce's cost is broadcast's cost plus reduce's cost, i.e. 2(p - 1 + n/b)α + (b(p-1) + n)(2β + γ).

Secondly, we derive the costs of the three Minimal Spanning Tree collectives. MPI adopts MST to broadcast or reduce short messages (Thakur et al. (2005)), the length of which is less than 12 KB. The core concept of MST is to organize the p GPUs into a balanced tree of height ⌈log p⌉. It then takes ⌈log p⌉ steps to traverse all GPUs in the tree. Each step carries a message of length n, so the cost of broadcast is the tree height times the cost per step, i.e. log p(α + nβ) (we omit the ceiling for simplicity). Similarly, MST reduce is log p(α + nβ + nγ), and MST allreduce is again a combination of broadcast and reduce. Please note the latency term, log p α, is the smallest among the algorithms in Table 1, while the bandwidth term, log p nβ, is the slowest as log p nβ >> nβ. Therefore, MST is widely used for highly frequent exchanges of short messages.

Finally, we present the costs of the three Bidirectional Exchange collectives. MPI broadcast handles long messages with an MST scatter followed by a BE allgather; please refer to Chan et al. (2007) for the analysis of BE collectives. Basically, scatter costs sum_{i=1}^{⌈log p⌉} (α + 2^{-i}nβ) = log p α + ((p-1)/p)nβ, while allgather costs (p - 1)α + ((p-1)/p)nβ.
The cost of broadcast is the sum of these two. The MPI long message reduce consists of a reducescatter plus a gather, while allreduce consists of a reducescatter and an allgather. The cost of reducescatter is log p α + ((p-1)/p)nβ + ((p-1)/p)nγ, and the costs of both gather and allgather are log p α + ((p-1)/p)nβ (also in Chan et al. (2007)). Table 1 summarizes the costs of broadcast, reduce and allreduce for the three different underlying algorithms.

The proposition holds under the assumptions α → 0 and n → ∞, and these assumptions are legitimate for the training of large scale neural networks on multiGPUs. Nowadays, PCI Express x16 effectively reduces the latency α down to 10^-7 s. Current two-socket shared memory machines support up to 8 GPUs, indicating limited p in practice. Let's take an appropriate block size b to ensure p < n/b and (n/b)α ≈ 0. This enables us to safely ignore the latency term, e.g. log p α in MST broadcast. On the other hand, current deep convolutional neural networks use a tremendous number of parameters; for example, AlexNet uses 250 MB of parameters, and the transmission rate 1/β is roughly 10^9 bytes per second. Compared to the trivial latency term, the bandwidth term dominates the entire cost T. This result leads us to simplify the costs of BE, MST, and LP based broadcast (Table 1) to 2((p-1)/p)nβ, nβ log p and (b(p-1) + n)β, obtaining the following equations:

T_broadcast.BE / T_broadcast.LP = 2(1 - 1/p) / (1 + b(p-1)/n) ≤ 2
T_broadcast.MST / T_broadcast.LP = log p / (1 + b(p-1)/n) ≤ log p

Compared with broadcast, reduce has the additional γ term. Please note the processing speed of GPUs exceeds TFLOPs, implying γn → 0. Therefore, it is also legitimate to ignore the γ term, and it yields the same results, T_reduce.BE / T_reduce.LP ≤ 2 and T_reduce.MST / T_reduce.LP ≤ log p. This completes our proof of Proposition 1.

Another interesting point is that the cost of Linear Pipeline is invariant to the GPU count p regardless of the message length n. This implies broadcasting a vector to 8 GPUs should cost about the same as broadcasting to 2 GPUs. In practice, we set the block size b around 64 KB, and p is within 10. This suggests the bandwidth term of LP broadcast satisfies (b(p-1) + n)β ≈ nβ. Hence, the cost of LP collectives is largely unaffected by the GPU count p.
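These simplified costs are easy to sanity-check numerically. The sketch below is our own check, with representative values α = 10^-7 s, β = 10^-9 s/byte and b = 64 KB assumed for illustration; it evaluates the Table 1 broadcast costs and the two ratios above.

import math

ALPHA = 1e-7       # latency per block exchange (s), representative value
BETA  = 1e-9       # transmission time per byte (s/byte), representative
B     = 64 * 1024  # pipeline block size in bytes

def broadcast_cost(algo, p, n, alpha=ALPHA, beta=BETA, b=B):
    """Broadcast cost models from Table 1 for BE, MST, and LP."""
    if algo == "BE":
        return (math.log2(p) + p - 1) * alpha + 2 * ((p - 1) / p) * n * beta
    if algo == "MST":
        return math.log2(p) * (alpha + n * beta)
    if algo == "LP":
        return (p - 1 + n / b) * alpha + (b * (p - 1) + n) * beta
    raise ValueError(algo)

n = 256 * 2**20  # a 256 MB message, roughly AlexNet-sized
for p in (2, 4, 8):
    be, mst, lp = (broadcast_cost(a, p, n) for a in ("BE", "MST", "LP"))
    # BE/LP approaches 2(1 - 1/p) <= 2; MST/LP approaches log p; the LP
    # column itself barely moves as p grows, showing the invariance to p.
    print(f"p={p}: BE/LP={be/lp:.2f}  MST/LP={mst/lp:.2f}  LP={lp*1e3:.1f} ms")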
"}, {"section_index": "4", "section_name": "3.3 DEEP LEARNING WITH EFFICIENT BSP SGD", "section_text": "We formulate neural network training as the following optimization problem. Let ψ_w be a loss function with weight vector w as function parameters that takes randomly sampled images d_t as the input. The objective of training is to find an approximate solution to the following problem:

min_w E{ψ_w(d)} = ∫ ψ_w(d) dP

A typical neural network training iteration consists of a forward and a backward pass. The forward pass yields a loss that measures the discrepancy between the current predictions and the target; the backward pass calculates the gradient, the negative of which points in the steepest descent direction. Gradient descent updates the parameters w as follows:

w^t = w^{t-1} - η_t ∇ψ_w(d_t)

Guided by Data Parallelism, BSP SGD evenly divides d_t into p slices d_t^1, d_t^2, ..., d_t^p so that every GPU computes a partial gradient from d_t^i in parallel. The global gradient is equivalent to the average of the partial gradients. After finishing the gradient update, w^t is synchronized to all GPUs. We integrate the proposed collectives into this process to harness the parallel processing capabilities of a multiGPU system. In this paper, we discuss two approaches to BSP SGD implementation.

fork and join: This approach forks the gradient computations, and joins partial gradients with communications. In this case, communications do not overlap with computations. Alg 2 and Alg 3 demonstrate two collective based implementations using 2 and 1 synchronization points, respectively.

Algorithm 2: BSP SGD using broadcast + reduce.
1: while not converge do
2:   ∇w_sub = ForwardBackward(d_t)
3:   ∇w = reduce(∇w_sub)
4:   if root then
5:     w_{t+1} = GradientUpdate()
6:   broadcast(w_{t+1})
7:   barrier  /* sync new w */

In Alg 2, synchronizations rely on broadcast and reduce. Each GPU calculates a partial gradient, referred to as ∇w_sub. The master GPU reconstructs ∇w by reducing all ∇w_sub. Then, the GPUs synchronize the latest weights w by broadcasting.

Algorithm 3: BSP SGD using allreduce.
1: while not converge do
2:   ∇w_sub = ForwardBackward(d_t)
3:   ∇w = allreduce(∇w_sub)
4:   barrier  /* collect ∇w_sub */
5:   w_{t+1} = GradientUpdate()
6:   if iter % 5 = 0 then
7:     broadcast(w_{t+1})

In Alg 3, synchronizations only rely on allreduce. The differences from Alg 2 are that 1) there is only 1 synchronization point, and 2) every GPU computes the gradient update. However, the parameters are not consistent after several iterations due to the precision issues of float multiplications in GradientUpdate. We synchronize w every 5 iterations to enforce consistency while still retaining the benefit of efficient pipelining in allreduce (lines 6-7 of Alg 3).

overlapping communications with computations: Another approach is to overlap communications and computations for each network layer. In the forward pass, GPUs broadcast the network parameters of layer t+1 during the forward computations at layer t. In the backward pass, GPUs reduce the partial gradients of layer t+1 during the backward computations at layer t. As a result, layer-wise computations partially overlap with communications, further improving the SGD efficiency. Alg 1 outlines the general idea of overlapping communications and computations during network training. We use nonblocking collectives to achieve the overlap.

Algorithm 1: BSP SGD with communications/computations overlapping.
1: while not converge do
2:   broadcast(w^0)
3:   for i in [0, 1, ..., max_layers] do
4:     nonblocking_broadcast(w^{i+1})
5:     Forward(i)
6:     sync_broadcast()
7:   Backward(max_layers)
8:   for i in [max_layers - 1, ..., 1, 0] do
9:     nonblocking_reduce(∇w_sub^{i+1})
10:    Backward(i)
11:   sync_reduce()
12:   w_{t+1} = GradientUpdate()

pros and cons of both approaches: The cost of Alg 2 or Alg 3 is comm + compt, while the cost of Alg 1 is max(comm, compt). If the network has over a few hundred MB of parameters, the overlapping will be significantly better than the fork and join approach. However, Alg 2 and Alg 3 are relatively easy to implement, and their performance on networks < 100 MB is similar to that of Alg 1.
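A minimal sketch of the layer-wise overlap in Alg 1 follows, using MPI-3 nonblocking collectives via mpi4py (version 2.0 or later is assumed). The layer objects, `weights`, and `grads` are hypothetical placeholders standing in for a real framework's API, and the backward half batches all reductions behind one wait, approximating sync_reduce().

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def bsp_step(layers, weights, grads):
    # Forward: broadcast layer i+1's weights while computing layer i.
    req = comm.Ibcast(weights[0], root=0)
    for i in range(len(layers)):
        if i + 1 < len(layers):
            nxt = comm.Ibcast(weights[i + 1], root=0)  # prefetch next layer
        req.Wait()                     # weights for layer i are now ready
        layers[i].forward()
        if i + 1 < len(layers):
            req = nxt
    # Backward: launch the reduction of each layer's partial gradient as
    # soon as it is computed, so it overlaps with earlier layers' backward.
    pending = []
    for i in reversed(range(len(layers))):
        layers[i].backward()
        pending.append(comm.Iallreduce(MPI.IN_PLACE, grads[i], op=MPI.SUM))
    MPI.Request.Waitall(pending)       # drain reductions before the update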
"}, {"section_index": "5", "section_name": "4 EXPERIMENT", "section_text": "The MST and BE implementations used in the benchmarks are Caffe* and OpenMPI. Caffe optimizes the GPU placement in an MST to fully utilize inter-GPU peer to peer (P2P) access. OpenMPI and our implementation, similar to Caffe, also take advantage of P2P. We set up AlexNet and GoogLeNet training using the three BSP SGD algorithms proposed in Section 3.3.

* Caffe implements an MST based broadcast and reduce for multiGPU training.

Figure 3: The performance of different collective algorithms at different message sizes on 4 K40m; panels (a) Broadcast, (b) Reduce, (c) AllReduce.

Figure 4: The scalability experiment: it measures performance variations (time to broadcast, reduce, and allreduce a 200 MB message) with increasing GPUs; panels (a) Broadcast, (b) Reduce, (c) AllReduce.

The theoretical analysis indicates that the costs of both the LP and BE collectives are invariant to the GPU count p, while the cost of MST increases with p by a factor of log p. This is also noticeable in the scalability experiment demonstrated in Fig 4. Please note there is a cost jump between 4 and 5 GPUs: communications have to go through QPI after 4 GPUs, incurring the additional cost of copying through the host RAM. The cost of the Linear Pipeline method robustly stays the same for GPU counts [2,3,4] or [5,6], and QPI explains the inconsistency. The communication steps of MST for 2,3,4,5,6 GPUs are 1,2,2,3,3, respectively. The MST experiments verify the log p cost increase w.r.t. GPU counts by evident cost jumps at 3 and 5 GPUs. The data flow of OpenMPI between two GPUs follows GPU RAM -> host RAM -> GPU RAM; this inefficient data flow inside Open MPI contributes to its near linear cost increase with the GPU count p.

Fig 3 presents the performance of LP, MST, and BE based collectives at different message sizes on 4 K40m. The LP broadcast demonstrates an average of 29.2x and 2.3x speedup over the BE and MST based alternatives in OpenMPI and Caffe; the LP reduce demonstrates an average of 360.55x and 8.7x speedup over BE and MST reduce, and the LP allreduce demonstrates an average of 109.2x and 7.9x speedup over BE and MST allreduce. In theory, LP is approximately 2x faster than both the MST (p = 4 implies log p = 2) and BE approaches. The extraordinary speedup against Open MPI is explained by inefficient data movement in Open MPI, which moves data to host RAM to perform reduce operations on the CPU before copying to the target GPU. Instead, we perform reduce on the GPUs, and data blocks flow directly to the target GPU via P2P access. The overlap of reduce computations with communications enables our reduce and allreduce to be 8x faster than those of MST: at each step of MST, GPUs reduce the incoming data only after all of the data is available, whereas our fine-grained block design enables communications and computations to overlap by reducing a block while receiving a new one in the pipeline. broadcast only involves data copies, and both we and Caffe use P2P to transmit the data; therefore, the speedup over MST broadcast (2.3x) conforms to the 2.0x theoretical prediction.

Figure 5: The training losses in fixed iterations on 4 K40m; panels (a) AlexNet (256 MB, iters = 30000, batch size = 1000), (b) GoogLeNet (51 MB, iters = 67000, batch size = 80). We set GoogLeNet lr = 0.01. AlexNet starts at lr = 0.015, set to 0.0015 after the average loss < 2.
The solver is SGD + momentum, and the dataset is ImageNet."}, {"section_index": "6", "section_name": "4.2 IMPACT ON THE NEURAL NETWORK TRAINING", "section_text": "Fig 5 demonstrates that LP collectives effectively reduce the total training time without affecting SGD's convergence properties in training large scale neural networks. We use inspurCaffe, Caffe and cuhk's Caffe branch to benchmark the performance of BE-Alg.1, MST-Alg.1 and BE-Overlap-Alg.3. We also implement Alg.1,2,3, integrated with LP collectives, in Caffe to ensure consistency. Please note the model size affects the communication time, while the batch size affects the computation time. We carefully set these parameters to cover as many cases as possible; please refer to the captions of Table 2 and Fig 5 for experiment details. We assume these algorithms have similar convergence speeds in iterations, as the losses of AlexNet are approximately 1 after 30000 iterations and the losses of GoogLeNet are approximately 2 after 67000 iterations. However, the time taken to reach the target loss varies dramatically. For example, the speedups of LP-Overlap-Alg.3 over BE-Alg.1 in training AlexNet and GoogLeNet are 2.12x and 2.19x, respectively.

Under Alg.1, but using different underlying collective algorithms, LP-Alg.1 presents 1.91x and 1.74x speedup over BE-Alg.1 and MST-Alg.1 in AlexNet, and 1.6x and 1.1x speedup over BE-Alg.1 and MST-Alg.1 in GoogLeNet. The iteration profiles of these 3 algorithms in Table 2 indicate the communication cost of LP-Alg.1 is only 10% of BE-Alg.1 and 11% of MST-Alg.1 in AlexNet, and 6% of BE-Alg.1 and 43% of MST-Alg.1 in GoogLeNet.

The experiments demonstrate that the speed of the three proposed BSP SGD algorithms is Alg.3 > Alg.2 > Alg.1. The result conforms to our expectations, as the cost of Alg.3 is max(comm, compt), while the cost of Alg.1 and Alg.2 is comm + compt. However, the performance gain from Alg.2 to Alg.3 is quite limited, as there is little room left for reducing communications from LP Alg.2 to Alg.3, as demonstrated in Table 2. If the model parameters keep increasing, we expect Alg.3 to be increasingly more efficient than Alg.2.

Table 2: The iteration profile. comm stands for communications, and compt stands for computations. % represents the percentage of communications in an iteration. The statistics are averages over 30000 AlexNet iterations and 67000 GoogLeNet iterations. We set the batch size of AlexNet to 1000, and GoogLeNet to 80. AlexNet and GoogLeNet are 256 MB and 51 MB, respectively."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Edgar Gabriel, Graham E Fagg, George Bosilca, Thara Angskun, Jack J Dongarra, Jeffrey M Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, et al.
Open MPI: Goals, concept, and design of a next generation MPI implementation. In European Parallel Virtual Machine/Message Passing Interface Users' Group Meeting, pp. 97-104. Springer, 2004.

Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B Gibbons, Garth A Gibson, Greg Ganger, and Eric P Xing. More effective distributed ML via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2013.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675-678, 2014.

Alekh Agarwal, Olivier Chapelle, Miroslav Dudik, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111-1133, 2014.

Galen M Shipman, Timothy S Woodall, Richard L Graham, Arthur B Maccabe, and Patrick G Bridges. Infiniband scalability in Open MPI. In Proceedings of the 20th IEEE International Parallel & Distributed Processing Symposium, 10 pp. IEEE, 2006.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 2595-2603, 2010."}]
rJzaDdYxx
[{"section_index": "0", "section_name": "GRADIENTS OF COUNTERFACTUALS", "section_text": "Mukund Sundararajan, Ankur Taly & Qigi Yan\nfmukunds, ataly, qigqiyan}@google.com\nGradients have been used to quantify feature importance in machine learning mod-\nels. Unfortunately, in nonlinear deep networks, not only individual neurons but\nalso the whole network can saturate, and as a result an important input feature car\nhave a tiny gradient. We study various networks, and observe that this phenomenz\nis indeed widespread, across many inputs.\nWe propose to examine interior gradients, which are gradients of counterfactual\ninputs constructed by scaling down the original input. We apply our method to the\nGoogleNet architecture for object recognition in images, as well as a ligand-based\nvirtual screening network with categorical features and an LSTM based language\nmodel for the Penn Treebank dataset. We visualize how interior gradients better\ncapture feature importance. Furthermore, interior gradients are applicable to a\nwide variety of deep networks, and have the attribution property that the feature\nimportance scores sum to the the prediction score.\nPractitioners of machine learning regularly inspect the coefficients of linear models as a measure o!\nfeature importance. This process allows them to understand and debug these models. The natura\nanalog of these coefficients for deep models are the gradients of the prediction score with respec\nto the input. For linear models, the gradient of an input feature is equal to its coefficient. For deer\nnonlinear models, the gradient can be thought of as a local linear approximation (Simonyan et al\n. Unfortunately, (see the next section), the network can saturate and as a result an important\ninput feature can have a tiny gradient.\nWhile there has been other work (see Section[2.10) to address this problem, these techniques involve\ninstrumenting the network. This instrumentation currently involves significant developer effort be-\ncause they are not primitive operations in standard machine learning libraries. Besides, these tech-\nniques are not simple to understand\u2014they invert the operation of the network in different ways, and\nhave their own peculiarities\u2014for instance, the feature importances are not invariant over networks\nthat compute the exact same function (see Figure{14).\nIn contrast, the method we propose builds on the very familiar, primitive concept of the gradient\u2014al\nit involves is inspecting the gradients of a few carefully chosen counterfactual inputs that are scalec\nversions of the initial input. This allows anyone who knows how to extract gradients\u2014presumab]}\neven novice practitioners that are not very familiar with the network\u2019s implementation\u2014to debu;\n\nthe network. Ultimately, this seems essential to ensuring that deep networks perform predictabl}\nwhen deployed."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Best of all, interior gradients can be computed just as easily as gradients. In\ncontrast, previous methods are complex to implement, which hinders practical\nadoption.\nr Top label: reflex camera\n\nScore: 0.993755\n\n(a) Original image.\nTop label: reflex camera\nScore: 0.996577\n\n(b) Ablated image.\nFigure 1: Pixel importance using gradients at the image."}, {"section_index": "2", "section_name": "2 OUR TECHNIQUE", "section_text": "Let us start by investigating the performance of gradients as a measure of feature importance. 
We use an object recognition network built using the GoogleNet architecture (Szegedy et al. (2014)) as a running example; we refer to this network by its codename Inception. (We present applications of our techniques to other networks in Section 3.) The network has been trained on the ImageNet object recognition dataset (Russakovsky et al. (2015)). It is 22 layers deep with a softmax layer on top for classifying images into one of the 1000 ImageNet object classes. The input to the network is a 224 x 224 sized RGB image.

We represent a 224 x 224 sized RGB image as a vector in R^{224x224x3}. Let Incp^L : R^{224x224x3} -> [0, 1] be the function represented by the Inception network that computes the softmax score for the object class labeled L. Let ∇Incp^L(img) be the gradients of Incp^L at the input image img. Thus, the vector ∇Incp^L(img) is the same size as the image and lies in R^{224x224x3}. As a shorthand, we write ∇Incp^L_{i,j,c}(img) for the gradient of a specific pixel (i, j) and color channel c in {R, G, B}.

We compute the gradients of Incp^L (with respect to the image) for the highest-scoring object class, and then aggregate the gradients ∇Incp^L(img) along the color dimension to obtain pixel importance scores*:

for all i, j:  P^L_{i,j}(img) := sum_{c in {R,G,B}} |∇Incp^L_{i,j,c}(img)|        (1)

* These pixel importance scores are similar to the gradient-based saliency map defined by Simonyan et al. (2013), with the difference being in how the gradients are aggregated along the color channel.

Next, we visualize the pixel importance scores by scaling the intensities of the pixels in the original image in proportion to their respective scores; thus, the higher the score, the brighter the pixel. Figure 1a shows such a visualization for an image for which the highest scoring object class is "reflex camera" with a softmax score of 0.9938.

Intuitively, one would expect the high gradient pixels for this classification to be ones falling on the camera or those providing useful context for the classification (e.g., the lens cap). However, most of the highlighted pixels seem to be to the left of or above the camera, which to a human seems not essential to the prediction. This could either mean that (1) the highlighted pixels are somehow important for the internal computation performed by the Inception network, or (2) gradients of the image fail to appropriately quantify pixel importance.

Let us consider hypothesis (1). In order to test it we ablate parts of the image to the left of and above the camera (by zeroing out the pixel intensities) and run the ablated image through the Inception network; see Figure 1b. The top predicted category still remains "reflex camera" with a softmax score of 0.9966, slightly higher than before. This indicates that the ablated portions are indeed irrelevant to the classification. On computing gradients of the ablated image, we still find that most of the high gradient pixels lie outside of the camera. This suggests that for this image, it is in fact hypothesis (2) that holds true. Upon studying more images (see Figure 4), we find that the gradients often fail to highlight the relevant pixels for the predicted object label."}, {"section_index": "3", "section_name": "2.2 SATURATION", "section_text": "In theory, it is easy to see that the gradients may not reflect feature importance if the prediction function flattens in the vicinity of the input, or equivalently, the gradient of the prediction function with respect to the input is tiny in the vicinity of the input vector. This is what we call saturation, which has also been reported in previous work (Shrikumar et al. (2016), Glorot & Bengio (2010)).
We analyze how widespread saturation is in the Inception network by inspecting the behavior of the network on counterfactual images obtained by uniformly scaling pixel intensities from zero to their values in an actual image. Formally, given an input image img in R^{224x224x3}, the set of counterfactual images is

{α · img | 0 ≤ α ≤ 1}

Figure 2a shows the trend in the softmax output of the highest scoring class, for thirty randomly chosen images from the ImageNet dataset. More specifically, for each image img, it shows the trend in Incp^L(α · img) as α varies from zero to one, with L being the label of the highest scoring object class for img. It is easy to see that the trend flattens (saturates) for all images as α increases. Notice that saturation is present even for images whose final score is significantly below 1.0. Moreover, for a majority of images, saturation happens quite soon, by α = 0.2.

One may argue that since the output of the Inception network is the result of applying the softmax function to a vector of activation values, the saturation is expected due to the squashing property of the softmax function. However, as shown in Figure 2b, we find that even the pre-softmax activation scores for the highest scoring class saturate.

In fact, to our surprise, we found that saturation is inherently present in the Inception network, and the outputs of the intermediate layers also saturate. We plot the distance between the intermediate layer neuron activations for a scaled down input image and the actual input image with respect to the scaling parameter, and find that the trend flattens. Due to lack of space, we provide these plots in the appendix.

It is quite clear from these plots that saturation is widespread across images in the Inception network, and there is a lot more activity in the network for counterfactual images at relatively low values of the scaling parameter α. This observation forms the basis of our technique for quantifying feature importance.

Note that it is well known that saturation of gradients prevents a model from converging to a good quality minimum (Glorot & Bengio (2010)). So one may expect good quality models to not have saturation, and hence for the (final) gradients to convey feature importance. Clearly, our observations on the Inception model show that this is not the case: it has good prediction accuracy, but also exhibits saturation (see Figure 2). Our hypothesis is that the gradients of important features are not saturated early in the training process; the gradients only saturate after the features have been learned adequately, i.e., the input is far away from the decision boundary.

We study the importance of input features in a prediction made for an input by examining the gradients of the counterfactuals obtained by scaling the input; we call this set of gradients interior gradients.

While the method of examining gradients of counterfactual inputs is broadly applicable to a wide range of networks, we first explain it in the context of Inception. Here, the counterfactual image inputs we consider are obtained by uniformly scaling pixel intensities from zero to their values in the actual image (this is the same set of counterfactuals that was used to study saturation). The interior gradients are the gradients of these images:

InteriorGrads(img) := {∇Incp(α · img) | 0 ≤ α ≤ 1}

These interior gradients explore the behavior of the network along the entire scaling curve depicted in Figure 2a, rather than at a specific point. We can aggregate the interior gradients along the color dimension to obtain interior pixel importance scores using equation 1.
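A minimal sketch of this computation follows, assuming a differentiable model wrapped as `score_fn` (returning the class score) and `grad_fn` (returning the gradient with respect to the input); both names are hypothetical placeholders for whatever oracle a given framework provides. The channel aggregation implements equation 1.

import numpy as np

def interior_gradients(img, score_fn, grad_fn, alphas=np.linspace(0, 1, 21)):
    """Scores and gradients of the counterfactuals alpha * img.

    score_fn(x) -> float and grad_fn(x) -> array like x are assumed to be
    user-supplied wrappers around the network. Plotting the scores against
    alpha reproduces the saturation curves of Figure 2; the collected
    gradients are the interior gradients.
    """
    scores, grads = [], []
    for a in alphas:
        x = a * img                   # counterfactual input
        scores.append(score_fn(x))
        grads.append(grad_fn(x))
    return np.array(scores), np.array(grads)

def pixel_importance(grad):
    """Equation 1: sum absolute gradients over the color channel."""
    return np.abs(grad).sum(axis=-1)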
We individually visualize the pixel importance scores for each scaling parameter α by scaling the intensities of the pixels in the actual image in proportion to their scores. The visualizations show how the importance of each pixel evolves as we scale the image, with the last visualization being identical to one generated by gradients at the actual image. In this regard, the interior gradients offer strictly more insight into pixel importance than just the gradients at the actual image.

Figure 3 shows the visualizations for the "reflex camera" image from Figure 1a for various values of the scaling parameter α. The plot in the top right corner shows the trend in the absolute magnitude of the average pixel importance score. The magnitude is significantly larger at lower values of α and nearly zero at higher values; the latter is a consequence of saturation. Note that each visualization is only indicative of the relative distribution of the importance scores across pixels and not of the absolute magnitude of the scores, i.e., the later snapshots are responsible for only tiny increases in the scores, as the chart in the top right depicts.

The visualizations show that at lower values of α, the pixels that lie on the camera are most important, and as α increases, the region above the camera gains importance. Given the high magnitude of gradients at lower values of α, we consider those gradients to be the primary drivers of the final prediction score. They are more indicative of feature importance in the prediction compared to the gradients at the actual image (i.e., when α = 1).

The visualizations of the interior pixel gradients can also be viewed together as a single animation that chains the visualizations in sequence of the scaling parameter. This animation offers a concise yet complete summary of how pixel importance moves around the image as the scaling parameter increases from zero to one.

Rationale. While measuring saturation via counterfactuals seems natural, using them for quantifying feature importance deserves some discussion. The first thing one may try in order to identify feature importance is to examine the deep network like one would examine human-authored code. This seems hard; just as deep networks employ distributed representations (such as embeddings), they perform convoluted (pun intended) distributed reasoning. So instead, we choose to probe the network with several counterfactual inputs (related to the input at hand), hoping to trigger all the internal workings of the network. This process helps summarize the effect of the network on the protagonist input, the assumption being that the input is human understandable. Naturally, it helps to work with gradients in this process, as via back propagation they induce an aggregate view of the function computed by the neurons.

Interior gradients use counterfactual inputs to artifactually induce a procedure on how the network's attention moves across the image as it computes the final prediction score. From the animation, we gather that the network focuses on strong and distinctive patterns in the image at lower values of the scaling parameter, and subtle and weak patterns in the image at higher values.
Thus, we speculate that the network's computation can be loosely abstracted by a procedure that first recognizes distinctive features of the image to make an initial prediction, and then fine tunes (these are the small score jumps that the chart in Figure 3 shows) the prediction using weaker patterns in the image.

Figure 2: Saturation in Inception; panels (a) softmax score for top label, (b) pre-softmax score for top label, both plotted against the scaling parameter α.

Figure 3: Visualization of interior gradients: the input image and the trend of the pixel importance scores obtained from interior gradients, with snapshots at α = 0.04, 0.06, 0.08, and so on. Notice that the visualizations at lower values of the scaling parameter (α) are sharper and much better at surfacing important features of the input image."}, {"section_index": "4", "section_name": "2.4 CUMULATING INTERIOR GRADIENTS", "section_text": "A different summarization of the interior gradients can be obtained by cumulating them. While there are a few ways of cumulating counterfactual gradients, the approach we take has the nice attribution property (Proposition 1) that the feature importance scores approximately add up to the prediction score. The feature importance scores are thus also referred to as attributions.

Notice that the set of counterfactual images {α · img | 0 ≤ α ≤ 1} falls on a straight line path in R^{224x224x3}. Interior gradients, which are the gradients of these counterfactual images, can be cumulated by integrating them along this line. We call the resulting gradients integrated gradients. In what follows, we formalize integrated gradients for an arbitrary function F : R^n -> [0, 1] (representing a deep network), and an arbitrary set of counterfactual inputs falling on a path in R^n.

Let x in R^n be the input at hand, and γ = (γ_1, ..., γ_n) : [0, 1] -> R^n be a smooth function specifying the set of counterfactuals; here, γ(0) is the baseline input (for Inception, a black image), and γ(1) is the actual input (for Inception, the image being studied). Specifically, {γ(α) | 0 ≤ α ≤ 1} is the set of counterfactuals (for Inception, a series of images that interpolate between the black image and the actual input).

The integrated gradient along the i-th dimension for an input x in R^n is defined as follows:

IntegratedGrads_i(x) := ∫_{α=0}^{1} (∂F(γ(α)) / ∂γ_i(α)) (∂γ_i(α) / ∂α) dα

where ∂F(x)/∂x_i is the gradient of F along the i-th dimension at x.

A nice technical property of the integrated gradients is that they add up to the difference between the output of F at the final counterfactual γ(1) and the baseline counterfactual γ(0).
This is formalized by the proposition below, which is an instantiation of the fundamental theorem of calculus for path integrals.

Proposition 1. If F is differentiable almost everywhere*, then sum_{i=1}^{n} IntegratedGrads_i(x) = F(γ(1)) - F(γ(0)).

* Formally, this means that the partial derivative of F along each input dimension satisfies Lebesgue's integrability condition, i.e., the set of discontinuous points has measure zero. Deep networks built out of Sigmoids, ReLUs, and pooling operators should satisfy this condition.

For most deep networks, it is possible to choose counterfactuals such that the prediction at the baseline counterfactual is near zero (F(γ(0)) ≈ 0)*. For instance, for the Inception network, the counterfactual defined by the scaling path satisfies this property, as Incp(0^{224x224x3}) = 0. In such cases, it follows from the Proposition that the integrated gradients form an attribution of the prediction output F(x), i.e., they almost exactly distribute the output to the individual input features.

* We did have trouble finding a baseline counterfactual for an RNN model that simulated the workings of a traffic light intersection between a main road and a side street; the naive benchmark counterfactual was one of no traffic at either intersection. But this did not have the lack of semantics that a black image or pure noise has for the Inception network. While no interesting labels are activated for the black image supplied to the Inception network, the same is not true for the "no traffic" benchmark supplied to the RNN model.

The additivity property provides a form of sanity checking for the integrated gradients and ensures that we do not under- or over-attribute to features. This is a common pitfall for attribution schemes based on feature ablations, wherein an ablation may lead to a small or a large change in the prediction score depending on whether the ablated feature interacts disjunctively or conjunctively with the rest of the features. The additivity is even more desirable when the network's score is numerically critical, i.e., the score is not used purely in an ordinal sense. In this case, the attributions (together with additivity) guarantee that the attributions are in the units of the score, and account for all of the score.

We note that these path integrals of gradients have been used to perform attribution in the context of small non-linear polynomials (Sun & Sundararajan (2011)), and also within the cost-sharing literature in economics, where the function at hand is a cost function that models the cost of a project as a function of the demands of various participants, and the attributions correspond to cost-shares. The specific path we use corresponds to a cost-sharing method called Aumann-Shapley (Aumann & Shapley (1974)).

Computing integrated gradients. The integrated gradients can be efficiently approximated by a Riemann sum, wherein we simply sum the gradients at points occurring at sufficiently small intervals along the path of counterfactuals:

IntegratedGrads_i^{approx}(x) := sum_{k=1}^{m} (∂F(γ(k/m)) / ∂γ_i) · (γ_i(k/m) - γ_i((k-1)/m))

Here m is the number of steps in the Riemann approximation of the integral. Notice that the approximation simply involves computing the gradient in a for loop; computing the gradient is central to deep learning and is a pretty efficient operation. The implementation should therefore be straightforward in most deep learning frameworks. For instance, in TensorFlow, it essentially amounts to calling tf.gradients in a loop over the set of counterfactual inputs (i.e., γ(k/m) for k = 1, ..., m), which could also be batched. Going forward, we abuse the term "integrated gradients" to refer to the approximation described above.

Integrated gradients for Inception. We compute the integrated gradients for the Inception network using the counterfactuals obtained by scaling the input image; γ(α) = α · img, where img is the input image.
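The Riemann approximation above is only a few lines in any framework. Here is a framework-agnostic sketch (our own), where `grad_fn(x)` is an assumed user-supplied oracle returning the gradient of F at x (e.g., a wrapper around tf.gradients), and the path is the straight line from a black baseline to the input.

import numpy as np

def integrated_gradients(img, grad_fn, baseline=None, m=50):
    """Riemann-sum approximation of integrated gradients.

    For the straight-line path gamma(a) = baseline + a * (img - baseline),
    the step difference gamma(k/m) - gamma((k-1)/m) is (img - baseline)/m,
    so each term is just the gradient times a constant path step.
    """
    if baseline is None:
        baseline = np.zeros_like(img)      # black image baseline
    delta = (img - baseline) / m
    total = np.zeros_like(img)
    for k in range(1, m + 1):
        x = baseline + (k / m) * (img - baseline)
        total += grad_fn(x) * delta        # gradient times path step
    return total  # entries sum (approximately) to F(img) - F(baseline)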
Similar to the interior gradients, the integrated gradients can also be aggregated along the color channel to obtain pixel importance scores, which can then be visualized as discussed earlier. Figure 4 shows these visualizations for a number of images. For comparison, it also presents the corresponding visualization obtained from the gradients at the actual image. From the visualizations, it seems quite evident that the integrated gradients are better at capturing important features.

We discuss two desirable axioms for feature attribution methods, and show that our integrated gradients method satisfies both. On the other hand, the other feature attribution methods in the literature break one of the two axioms. These methods include DeepLift (Shrikumar et al. (2016)), Layer-wise relevance propagation (LRP) (Binder et al. (2016)), Deconvolutional networks (Zeiler & Fergus (2014)), and Guided back-propagation (Springenberg et al. (2014))."}, {"section_index": "5", "section_name": "Sensitivity.", "section_text": "A highly desirable property for feature attributions is Sensitivity. If a non-zero change in a single input variable (holding all other variables fixed) changes the output by a non-zero amount, then this variable should be given a non-zero attribution. In other words, attribution should be sensitive to change.

Integrated Gradients (ignoring the approximation in computing integrals) satisfies Sensitivity: the attribution to the variable is in fact equal to the change in function value (this is a one-variable instance of Proposition 1).

Gradients break Sensitivity due to saturation (see Section 2.2), i.e., the prediction function may flatten at the input and thus have zero gradient despite the function value at the input being different from that at the benchmark. For a concrete example, consider a one variable, one ReLU network, f(x) = 1 - ReLU(1 - x). Suppose we change the input from x = 0 to x = 2. The function changes from 0 to 1, but because f is flat at x = 1, the gradient method gives an attribution of 0 to x, violating sensitivity. We defer the counterexamples for other methods to Appendix B.
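This failure is easy to reproduce numerically; the sketch below is our own check of the example above, evaluating the gradient at x = 2 against the integrated-gradients attribution along the path from 0 to 2.

import numpy as np

relu = lambda z: np.maximum(z, 0.0)
f = lambda x: 1.0 - relu(1.0 - x)            # flat for x >= 1

def grad_f(x, eps=1e-6):                      # numerical derivative of f
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x0, x1, m = 0.0, 2.0, 1000
gradient_attr = grad_f(x1) * (x1 - x0)        # local gradient at the input
ig_attr = sum(grad_f(x0 + (k / m) * (x1 - x0)) * (x1 - x0) / m
              for k in range(1, m + 1))       # Riemann sum along the path

print(gradient_attr)  # 0.0  -> the gradient misses the change entirely
print(ig_attr)        # ~1.0 -> matches f(2) - f(0), satisfying Sensitivity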
Implementation Invariance. Two networks can be functionally equivalent, i.e., their outputs are equal for all inputs, despite having very different implementations. We would like our attribution method to satisfy Implementation Invariance, i.e., the attributions are always identical for two functionally equivalent networks. To motivate this, notice that attribution can be colloquially defined as distributing the blame (or credit) for the output to the input features. Such a definition does not refer to implementation details. Moreover, the common practice of machine learning tends to evaluate models from an input-output point of view, where implementations are purely means to an end.

Attributions generated by integrated gradients (or gradients, or any function of the interior gradients) satisfy Implementation Invariance, since they are based only on the gradients of the function represented by the network. On the other hand, this fundamental property is unfortunately broken for the DeepLift and LRP methods. Below, we describe intuition for why Implementation Invariance is broken by these methods; a concrete example is provided in Appendix B.

First, notice that gradients are invariant to implementation. In fact, the chain rule for gradients, ∂f/∂g = (∂f/∂x) · (∂x/∂g), is essentially about implementation invariance. To see this, think of g and f as the input and output of a system, with x an intermediate state of the system: the gradient of the output f to the input g can be computed either directly by ∂f/∂g, ignoring the intermediate x, or by invoking the chain rule via x.

As previously discussed, gradients don't satisfy Sensitivity, and are therefore unsuitable for attribution. Methods like DeepLift tackle this issue by introducing a benchmark, and in some sense try to compute "discrete gradients" instead of gradients; they use a backpropagation procedure for composing discrete gradients. Unfortunately, such approaches are problematic because the chain rule does not hold for discrete gradients in general. Formally, the discrete analog

(f(x_1) - f(x_0)) / (g(x_1) - g(x_0)) · (g(x_1) - g(x_0)) / (x_1 - x_0) = (f(x_1) - f(x_0)) / (x_1 - x_0)

does not hold in general (e.g., when the intermediate quantity g is vector-valued), and therefore these methods fail to satisfy Implementation Invariance.

If an attribution method fails to satisfy Implementation Invariance, the attributions are potentially sensitive to unimportant aspects of the models. For instance, in the example in Appendix B, the network architecture has more degrees of freedom than needed for representing the function, and as a result there are two sets of values for the network parameters that lead to the same function. The training procedure can converge at either set of values depending on the initialization or for other reasons, but the underlying network function would remain the same.
It is undesirable that attributions differ for such reasons.

There are many methods that satisfy Implementation Invariance and Sensitivity. In this section we show that Integrated Gradients is not just one of them; it is in fact the only method that satisfies an extended set of axioms. The additional axioms are reasonably natural, but perhaps not as fundamental to attribution. As we shall see in the next section, there does not seem to be a perfect empirical evaluation for attribution methods. We hope that these axioms provide a theoretical framework for evaluating attribution methods, a good complement to empirical evaluations.

As discussed earlier, Integrated Gradients corresponds to a method called Aumann-Shapley, studied by economists in the context of cost-sharing. (The function at hand is a cost function whose input variables are demands of different participants, and attributions correspond to cost-shares.) Here is the list of axioms, borrowed from the cost-sharing literature (Billera & Heath (1982)); a longer discussion of the desirability of these axioms in the context of attribution can be found in Sun & Sundararajan (2011).

- Dummy: If the function implemented by the deep network does not depend on a variable, then the attribution to it is always zero.
- Additivity: For all inputs, the attributions for a function f_1 + f_2 are the sum of the attributions for the function f_1 and the function f_2.
- Completeness: The attributions add up to the difference between the function values at the input and the benchmark.
- Scale Invariance: Informally, if the inputs to two networks differ in the scale of one of the variables (say Fahrenheit and Celsius), but the networks have the same output for corresponding (rescaled) inputs, then the attributions should be identical.
- Proportional Attributions for Homogeneous Variables: If a function can be represented by the sum of two variables, then the two variables should receive attributions proportional to their input values.

Proposition 2 (Billera & Heath (1982)). Integrated Gradients is the unique method that satisfies all of the axioms above.

We now discuss an empirical evaluation of integrated gradients as a measure of feature importance, using gradients as a benchmark.

Pixel ablations. The first evaluation is based on a method by Samek et al. (2015). Here we ablate* the top 5000 pixels (10% of the image) by importance score, and compute the score drop for the highest scoring object class. The ablation is performed 100 pixels at a time, in a sequence of 50 steps. At each perturbation step k we measure the average drop in score up to step k. This quantity is referred to as the area over the perturbation curve (AOPC) by Samek et al. (2015).

* Ablation in our setting amounts to zeroing out (or blacking out) the intensity for the R, G, B channels. We view this as a natural mechanism for removing the information carried by the pixel (rather than, say, randomizing the pixel's intensity as proposed by Samek et al. (2015)), especially since the black image is a natural baseline for vision tasks.

Localization. The second evaluation is to consider images with human-drawn bounding boxes around objects, and compute the percentage of pixel attribution inside the bounding box. We use the 2012 ImageNet object localization challenge dataset to get a set of human-drawn bounding boxes. We run our evaluation on 100 randomly chosen images satisfying the following properties: (1) the total size of the bounding box(es) is less than two thirds of the image size, and (2) ablating the bounding box significantly drops the prediction score for the object class. (1) ensures that the boxes are not so large that the bulk of the attribution falls inside them by definition, and (2) ensures that the boxed part of the image is indeed responsible for the prediction score for the image. We find that on 82 images the integrated gradients technique leads to a higher fraction of the pixel attribution inside the box than gradients at the actual image. The average difference in the percentage pixel attribution inside the box for the two techniques is 8.4%.

While these results are promising, we note the following caveat. Integrated gradients are meant to capture pixel importance with respect to the prediction task. While for most objects one would expect the pixels located on the object to be most important for the prediction, in some cases the context in which the object occurs may also contribute to the prediction. The cabbage butterfly image from Figure 4 is a good example of this, where the pixels on the leaf are also surfaced by the integrated gradients.

Finally, also note that we did not compare against other whitebox attribution techniques (e.g., DeepLift (Shrikumar et al. (2016))), because our focus was on black-box techniques that are easy to implement, so comparing against gradients seems like a fair comparison.
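A minimal sketch of this ablation protocol (our own paraphrase of the evaluation, with `score_fn` again an assumed model wrapper) follows:

import numpy as np

def aopc_curve(img, attributions, score_fn, steps=50, per_step=100):
    """Ablate pixels in decreasing attribution order; track score drops.

    attributions: per-pixel importance of shape (H, W);
    score_fn(x) -> score of the originally predicted class.
    At each step the next `per_step` most important pixels are blacked
    out, and we record the running average drop relative to the original
    score, i.e. the area over the perturbation curve (AOPC).
    """
    order = np.argsort(attributions.ravel())[::-1]    # most important first
    base = score_fn(img)
    ablated = img.copy()
    drops = []
    h, w = attributions.shape
    for k in range(steps):
        idx = order[k * per_step:(k + 1) * per_step]
        ablated[np.unravel_index(idx, (h, w))] = 0.0  # zero all 3 channels
        drops.append(base - score_fn(ablated))
    return np.cumsum(drops) / np.arange(1, steps + 1)  # AOPC up to step k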
Figure 5 shows the AOPC curve with respect to the number of perturbation steps for integrated gradients and gradients at the image. AOPC values at each step represent the average over a dataset of 150 randomly chosen images. It is clear that ablating the top pixels identified by integrated gradients leads to a larger score drop than ablating those identified by gradients at the image.

Having said that, we note an important issue with the technique. The images resulting from pixel perturbation are often unnatural, and it could be that the scores drop simply because the network has never seen anything like it in training.

Eyeballing. Ultimately, it was hard to come up with a perfect evaluation technique. So we did spend a large amount of time applying and eyeballing the results of our technique on various networks: the ones presented in this paper, as well as some networks used within products. For the Inception network, we welcome you to eyeball more visualizations in Figure 11 in the appendix and also at https://github.com/ankurtaly/Attributions. While we found our method to beat gradients at the image for the most part, this is clearly a subjective process prone to interpretation and cherry-picking, but it is also ultimately the measure of the utility of the approach; debugging inherently involves the human."}, {"section_index": "6", "section_name": "2.8 DEBUGGING NETWORKS", "section_text": "Despite the widespread application of deep neural networks to problems in science and technology, their internal workings largely remain a black box. As a result, humans have a limited ability to understand the predictions made by these networks. This is viewed as a hindrance in scenarios where the bar for precision is high, e.g., medical diagnosis, obstacle detection for robots, etc.

Quantifying feature importance for individual predictions is a first step towards understanding the behavior of the network; at the very least, it helps debug misclassified inputs and sanity-check the internal workings. We present evidence to support this below.

We use feature importance to debug misclassifications made by the Inception network. In particular, we consider images from the ImageNet dataset where the groundtruth label for the image is not in the top five labels predicted by the Inception network. We use interior gradients to compute pixel importance scores for both the Inception label and the groundtruth label, and visualize them to gain insight into the cause of the misclassification.

Figure 6 shows the visualizations for two misclassified images. The top image genuinely has two objects, one corresponding to the groundtruth label and the other corresponding to the Inception label. We find that the interior gradients for each label are able to emphasize the corresponding objects. Therefore, we suspect that the misclassification is in the ranking logic for the labels rather than in the recognition logic for each label. For the bottom image, we observe that the interior gradients are largely similar. Moreover, the cricket gets emphasized by the interior gradients for the mantis (the Inception label). Thus, we suspect this to be a more serious misclassification, stemming from the recognition logic for the mantis.

Faithfulness.
Faithfulness. A natural question is to ask why gradients of counterfactuals obtained by scaling the input capture feature importance for the original image. First, from studying the visualizations in Figure 4, the results look reasonable in that the highlighted pixels capture features representative of the predicted class as a human would perceive them. Second, we confirmed that the network too seems to find these features representative by performing ablations. It is somewhat natural to expect that the Inception network is robust to changes in input intensity; presumably there are some low brightness images in the training set.

However, these counterfactuals seem reasonable even for networks where such scaling does not correspond to a natural concept like intensity, and when the counterfactuals fall outside the training set; for instance in the case of the ligand-based virtual screening network (see Section 3.1). We speculate that the reason why these counterfactuals make sense is that the network is built by composing ReLUs. As one scales the input starting from a suitable baseline, various neurons activate, and the scaling process does a somewhat thorough job of exploring all these events that contribute to the prediction for the input. There is an analogous argument for other operators such as max pool, average pool, and softmax; there the triggering events aren't discrete, but the argument is analogous.

Limitations of Approach. We discuss some limitations of our technique; in a sense these are limitations of the problem statement and apply equally to other techniques that attribute to base input features."}, {"section_index": "7", "section_name": "2.10 RELATED WORK", "section_text": "• Inability to capture feature interactions: The models could perform logic that effectively combines features via conjunction or implication-like operations; for instance, it could be that a molecule binds to a site if it has a certain structure that is essentially a conjunction of certain atoms and certain bonds between them. Attributions or importance scores have no way to represent these interactions.

• Feature correlations: Feature correlations are a bane to the understandability of all machine learning models. If there are two features that frequently co-occur, the model is free to assign weight to either or both features. The attributions would then respect this weight assignment. But it could be that the specific weight assignment chosen by the model is not human-intelligible. Though there have been approaches to feature selection that reduce feature correlations (Yu & Liu), it is unclear how they apply to deep models on dense input.

Over the last few years, there has been a vast amount of work on demystifying the inner workings of deep networks. Most of this work has been on networks trained on computer vision tasks, and deals with understanding what a specific neuron computes (Erhan et al. (2009); Le (2013)) and interpreting the representations captured by neurons during a prediction (Mahendran & Vedaldi (2015); Dosovitskiy & Brox (2015); Yosinski et al. (2015)).

Our work instead focuses on understanding the network's behavior on a specific input in terms of the base level input features. Our technique quantifies the importance of each feature in the prediction. Known approaches for accomplishing this can be divided into three categories.

Gradient based methods. The first approach is to use gradients of the input features to quantify feature importance (Baehrens et al. (2010); Simonyan et al. (2013)). This approach is the easiest to implement. However, as discussed earlier, naively using the gradients at the actual input does not accurately quantify feature importance, as gradients suffer from saturation.
Score back-propagation based methods. The second set of approaches involves back-propagating the final prediction score through each layer of the network down to the individual features. These include DeepLift (Shrikumar et al. (2016)), Layer-wise relevance propagation (LRP) (Binder et al. (2016)), Deconvolutional networks (DeConvNets) (Zeiler & Fergus (2014)), and Guided back-propagation (Springenberg et al. (2014)). These methods largely differ in the back-propagation logic for various non-linear activation functions. While DeConvNets, Guided back-propagation and LRP rely on the local gradients at each non-linear activation function, DeepLift relies on the deviation in the neuron's activation from a certain baseline input.

Similar to integrated gradients, DeepLift and LRP also result in an exact distribution of the prediction score to the input features. However, as shown by Figure 14, the attributions are not invariant across functionally equivalent networks. Besides, the primary advantage of our method over all these methods is its ease of implementation. The aforesaid methods require knowledge of the network architecture and the internal neuron activations for the input, and involve implementing a somewhat complicated back-propagation logic. On the other hand, our method is agnostic to the network architecture and relies only on computing gradients, which can be done efficiently in most deep learning frameworks.

Model approximation based methods. The third approach, proposed first by Ribeiro et al. (2016a;b), is to locally approximate the behavior of the network in the vicinity of the input being explained with a simpler, more interpretable model. An appealing aspect of this approach is that it is completely agnostic to the structure of the network and only deals with its input-output behavior. The approximation is learned by sampling the network's output in the vicinity of the input at hand. In this sense, it is similar to our approach of using counterfactuals. Since the counterfactuals are chosen somewhat arbitrarily, and the approximation is based purely on the network's output at the counterfactuals, the faithfulness question is far more crucial in this setting. The method is also expensive to implement, as it requires training a new model locally around the input being explained.

The technique of quantifying feature importance by inspecting gradients of counterfactual inputs is generally applicable across deep networks. While for networks performing vision tasks the counterfactual inputs are obtained by scaling pixel intensities, for other networks they may be obtained by scaling an embedding representation of the input.

As a proof of concept, we apply the technique to the molecular graph convolutions network of Kearnes et al. (2016) for ligand-based virtual screening, and to an LSTM model (Zaremba et al. (2014)) for language modeling of the Penn Treebank dataset (Marcus et al. (1993)).

The Ligand-Based Virtual Screening problem is to predict whether an input molecule is active against a certain target (e.g., protein or enzyme). The process is meant to aid the discovery of new drug molecules. Deep networks built using molecular graph convolutions have recently been proposed by Kearnes et al. (2016) for solving this problem.

Once a molecule has been identified as active against a target, the next step for medicinal chemists is to identify the molecular features (formally, pharmacophores**) that are responsible for the activity.

**A pharmacophore is the ensemble of steric and electronic features that is necessary to ensure that a molecule is active against a specific biological target, i.e., to trigger (or to block) its biological response.
This is akin to quantifying feature importance, and can be achieved using the method of integrated gradients. The attributions obtained from the method help with identifying the dominant molecular features, and also help sanity check the behavior of the network by shedding light on its inner workings. With regard to the latter, we discuss an anecdote later in this section on how attributions surfaced an anomaly in the W1N2 network architecture proposed by Kearnes et al. (2016).

Figure 4: Comparing integrated gradients with gradients at the image. Left-to-right: original input image, label and softmax score for the highest scoring class, visualization of integrated gradients, visualization of gradients at the image. Notice that the visualizations obtained from integrated gradients are better at reflecting distinctive features of the image.

Figure 5: AOPC (Samek et al. (2015)) for integrated gradients and gradients at the image.

Figure 6: Interior gradients of misclassified images. Left-to-right: original image, softmax score for the top label assigned by the Inception network and the groundtruth label provided by ImageNet, visualization of integrated gradients w.r.t. the Inception label, visualization of integrated gradients w.r.t. the groundtruth label. (Panel labels recovered: Inception label "strainer", score 0.594582; Inception label "mantis", score 0.0908096; groundtruth label "cricket", score 0.018476.)

Defining the counterfactual inputs. The first step in computing integrated gradients is to define the set of counterfactual inputs. The network requires an input molecule to be encoded by hand as a set of atom and atom-pair features describing the molecule as an undirected graph. Atoms are featurized using a one-hot encoding specifying the atom type (e.g., C, O, S, etc.), and atom-pairs are featurized by specifying either the type of bond (e.g., single, double, triple, etc.) between the atoms, or the graph distance between them.***

***This featurization is referred to as "simple" input featurization in Kearnes et al. (2016).

The counterfactual inputs are obtained by scaling the molecule features down to zero vectors, i.e., the set {αFeatures(mol) | 0 ≤ α ≤ 1}, where Features(mol) is an encoding of the molecule into atom and atom-pair features.

The careful reader might notice that these counterfactual inputs are not valid featurizations of molecules. However, we argue that they are still valid inputs for the network. First, all operators in the network (e.g., ReLUs, linear filters, etc.) treat their inputs as continuous real numbers rather than discrete zeros and ones. Second, all fields of the counterfactual inputs are bounded between zero and one; therefore, we don't expect them to appear spurious to the network. We discuss this further in the discussion of faithfulness above.
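A minimal sketch of the attribution procedure for this setting follows; the featurized molecule, the gradient callable grad_fn, and the per-feature index sets are assumptions of ours rather than the code used in the paper:

import numpy as np

def molecule_attributions(features, grad_fn, steps=50):
    # Counterfactuals are the molecule's feature encoding scaled towards the
    # all-zero vector: { alpha * Features(mol) | 0 <= alpha <= 1 }.
    alphas = np.linspace(1.0 / steps, 1.0, steps)
    avg_grad = np.mean([grad_fn(a * features) for a in alphas], axis=0)
    return features * avg_grad  # sparse wherever the input features are zero

Because the feature vector is sparse, the resulting attributions are sparse too, and can be aggregated (e.g., summed over hypothetical index sets for atom, bond, and distance-pair features) to account for the contribution of each feature group.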
Figure 7: Attributions for a molecule under the W2N2 network (Kearnes et al. (2016)). The molecule is active on task PCBA-588342. (Attribution summary recovered from the figure: molecule 101562745; softmax score for task PCBA-588342: 0.98; atom attributions: 0.62 (63%); bond attributions: 0.45 (46%); distance-pair attributions: -0.03 (-3%).)

In what follows, we discuss the behavior of a network based on the W2N2-simple architecture proposed by Kearnes et al. (2016). On inspecting the behavior of the network over counterfactual inputs, we observe saturation here as well. Figure 13a shows the trend in the softmax score for the task PCBA-588342 for twenty five active molecules as we vary the scaling parameter α from zero to one. While the overall saturated region is small, saturation does exist in the vicinity of the input (0.9 < α < 1). Figure 13b in the appendix shows that the total feature gradient varies significantly along the scaling path; thus, the gradients at just the molecule are not fully indicative of the behavior of the network.

Visualizing integrated gradients. We cumulate the gradients of these counterfactual inputs to obtain an attribution of the prediction score to each atom and atom-pair feature. Unlike image inputs, which have dense features, the set of input features for molecules is sparse. Consequently, the attributions are sparse and can be inspected directly. Figure 7 shows heatmaps for the atom and atom-pair attributions for a specific molecule.

Using the attributions, one can easily identify the atoms and atom-pairs that have a strongly positive or strongly negative contribution. Since the attributions add up to the final prediction score (see Proposition 1), the attribution magnitudes can be used for accounting the contributions of each feature. For instance, the atom-pairs that have a bond between them cumulatively contribute 46% of the prediction score, while all other atom pairs cumulatively contribute -3%.

We presented the attributions for 100 molecules active against a specific task to a few chemists. The chemists were able to immediately spot dominant functional groups (e.g., aromatic rings) being surfaced by the attributions. A next step could be to cluster and aggregate the attributions across a large set of molecules active against a specific task, to identify a common denominator of features shared by all active molecules.

Identifying Dead Features. We now discuss how attributions helped us spot an anomaly in the W1N2 architecture. On applying the integrated gradients method to the W1N2 network, we found that several atoms in the same molecule received the exact same attribution. For instance, for the molecule in Figure 7, we found that several carbon atoms at positions 2, 3, 14, 15, and 16 received the same attribution of 0.0043 despite being bonded to different atoms; e.g., the carbon at position 3 is bonded to an oxygen whereas the carbon at position 2 is not. This is surprising, as one would expect two atoms with different neighborhoods to be treated differently by the network.

On investigating the problem further, we found that since the W1N2 network had only one convolution layer, the atom and atom-pair features were not fully convolved. This caused all atoms that have the same atom type, and the same number of bonds of each type, to contribute identically to the network. This is not the case for networks that have two or more convolutional layers.

Despite the aforementioned problem, the W1N2 network had good predictive accuracy. One hypothesis for this is that the atom types and their neighborhoods are tightly correlated; for instance, an outgoing double bond from a carbon is always to another carbon or oxygen atom. As a result, given the atom type, an explicit encoding of the neighborhood is not needed by the network. This also suggests that equivalent predictive accuracy can be achieved using a simpler "bag of atoms" type model."}, {"section_index": "8", "section_name": "3.2 LANGUAGE MODELING", "section_text": "To apply our technique to language modeling, we study word-level language modeling of the Penn Treebank dataset (Marcus et al. (1993)), and apply an LSTM-based sequence model based on Zaremba et al. (2014). For such a network, given a sequence of input words and the softmax prediction for the next word, we want to identify the importance of the preceding words for the prediction.

As in the case of the Inception model, we observe saturation in this LSTM network. To describe the setup, we choose 20 randomly chosen sections of the test data, and for each of them inspect the prediction score of the next word using the first 10 words. Then we give each of the 10 input words a weight of α ∈ [0, 1], which is applied to scale their embedding vectors. In Figure 8 we plot the prediction score as a function of α. All curves except one start near zero at α = 0, move around in the middle, stabilize, and turn flat around α = 1. For the interesting special case where the softmax score is non-zero at α = 0, it turns out that the word being predicted represents out-of-vocabulary words.

Figure 8: Softmax scores of the next word in the LSTM language model (Section 3.2), plotted against the scaling parameter α.

Figure 9: Prediction for "than": 0.5307, total integrated gradient: 0.5322.

Figure 10: Prediction for "ual": 0.0062, total integrated gradient: 0.0063. (Recovered values; some entries of the continuation rows were lost.)

Sentence: and | N | minutes | after | the | ual | trading
Integrated gradients (×1e-3): 0.0707 | 0.1286 | 0.3619 | 11.9796 | -0.0063 | 4.1565 | 0.2213
Gradients (×1e-3): 0.0066 | 0.0009 | 0.0075 | 0.0678 | 0.0033 | 0.0474 | 0.0184

Sentence (cont.): halt | came | news | that | the | ual | group
Integrated gradients (×1e-3): -0.8501 | -0.4271 | 0.4401 | -0.0919 | 0.3042
Gradients (×1e-3): -0.0590 | -0.0059 | 0.0511 | 0.0041 | 0.0349

In Table 9 and Table 10 we show two comparisons of gradients to integrated gradients. Due to saturation, the magnitudes of gradients are so small compared to the prediction scores that it is difficult to make sense of them. In comparison, (approximate) integrated gradients have a total amount close to the prediction, and seem to make sense. For example, in the first example, the integrated gradients attribute the prediction score of "than" to the preceding word "more". This makes sense, as "than" often follows right after "more" in English. On the other hand, the standard gradient gives a slightly negative attribution that betrays our intuition. In the second example, in predicting the second "ual", integrated gradients are clearly the highest for the first occurrence of "ual", which is the only word that is highly predictive of the second "ual". On the other hand, standard gradients are not only tiny, but also similar in magnitude for multiple words.
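The embedding-scaling procedure just described can be sketched in a few lines; as before, grad_fn is our own naming for a callable wrapping the network's gradient:

import numpy as np

def word_attributions(embeddings, grad_fn, steps=50):
    # embeddings: (T, d) matrix of the T preceding words' embedding vectors;
    # grad_fn returns d(prediction score)/d(embeddings) at a scaled input.
    alphas = np.linspace(1.0 / steps, 1.0, steps)
    avg_grad = np.mean([grad_fn(a * embeddings) for a in alphas], axis=0)
    return (embeddings * avg_grad).sum(axis=1)  # one attribution per word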
"}, {"section_index": "9", "section_name": "4 CONCLUSION", "section_text": "We present Interior Gradients, a method for quantifying feature importance. The method can be applied to a variety of deep networks without instrumenting the network; in fact, the amount of code required is fairly tiny. We demonstrate that it is possible to have some understanding of the performance of the network without a detailed understanding of its implementation, opening up the possibility of easy and wide application, and lowering the bar on the effort needed to debug deep networks.

We also wonder if Interior Gradients are useful within training as a measure against saturation, or indeed in other places that gradients are used."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Patrick Riley and Christian Szegedy for their helpful feedback on the technique and on drafts of this paper.

Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, and Wojciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. CoRR, 2016. URL http://arxiv.org/abs/1604.00825.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, 2009.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, pp. 313-330, 1993.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, 2013. URL http://arxiv.org/abs/1312.6034.

Jason Yosinski, Jeff Clune, Anh Mai Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. CoRR, 2015. URL http://arxiv.org/abs/1506.06579.

Figure 11: More visualizations comparing integrated gradients with gradients at the image. Left-to-right: original input image, label and softmax score for the highest scoring class, visualization of integrated gradients, visualization of gradients at the image. (Top labels recovered from the panels: spiny lobster; Rottweiler, score 0.999882; American coot, score 0.999229; traffic light; head cabbage; manhole cover, score 1.0; golfcart.)

Figure 13: Saturation in the W2N2 network (Kearnes et al. (2016)). Plots of the softmax score for task PCBA-58834, and of the sum of the feature gradients w.r.t. the same task, for twenty molecules. All molecules are active against the task. (Panel titles: (a) softmax score for the task; (b) sum of the feature gradients.)

Figure 12: Saturation in intermediate layers of Inception. For each layer we plot the L2 and cosine distance between the activation vector for a scaled-down image and the actual input image, with respect to the scaling parameter. Each plot shows the trend for 30 randomly chosen images from the ImageNet dataset. Notice that the trends in all plots flatten as the scaling parameter increases. For the deepest Inception layer mixed5b, the cosine distance to the activation vector at the image is less than 0.01 when α > 0.6, which is really tiny given that this layer has 50176 neurons. (Layers shown: mixed3b, mixed4b, mixed4d, mixed5b.)
Figure 14: Attributions for two functionally equivalent networks. The figure shows attributions for two functionally equivalent networks f(x1, x2) and g(x1, x2) at the input x1 = 3, x2 = 1 using integrated gradients, DeepLift (Shrikumar et al. (2016)), and Layer-wise relevance propagation (LRP) (Binder et al. (2016)). The reference input for integrated gradients and DeepLift is x1 = 0, x2 = 0. All methods except integrated gradients provide different attributions for the two networks.

Attributions at x1 = 3, x2 = 1:
Integrated gradients: f: x1 = 2, x2 = -1 | g: x1 = 2, x2 = -1
DeepLift: f: x1 = 1.5, x2 = -0.5 | g: x1 = 2, x2 = -1
LRP: f: x1 = 1.5, x2 = -0.5 | g: x1 = 2, x2 = -1"}, {"section_index": "11", "section_name": "ATTRIBUTION COUNTER-EXAMPLES", "section_text": "We show that the methods DeepLift and Layer-wise relevance propagation (LRP) break the implementation invariance axiom, and that the Deconvolution and Guided back-propagation methods break the sensitivity axiom.

Figure 14 provides an example of two equivalent networks f(x1, x2) and g(x1, x2) for which DeepLift and LRP yield different attributions. The two networks are built from

$$h(x_1, x_2) = \mathrm{ReLU}(x_1) - 1 - \mathrm{ReLU}(x_2)$$
$$k(x_1, x_2) = \mathrm{ReLU}(x_1 - 1) - \mathrm{ReLU}(x_2)$$

with f(x1, x2) = ReLU(h(x1, x2)) and g(x1, x2) = ReLU(k(x1, x2)), as depicted in Figure 14.

Note that h and k are not equivalent: they have different values whenever x1 < 1. But f and g are equivalent. To prove this, suppose for contradiction that f and g are different for some x1, x2. Then it must be the case that ReLU(x1) - 1 ≠ ReLU(x1 - 1). This happens only when x1 < 1, which implies that f(x1, x2) = g(x1, x2) = 0.

Now we leverage the above example to show that Deconvolution and Guided back-propagation break sensitivity. Consider the network f(x1, x2) from Figure 14. For a fixed value of x1 greater than 1, the output decreases linearly as x2 increases from 0 to x1 - 1. Yet, for all inputs, Deconvolutional networks and Guided back-propagation result in zero attribution for x2. This happens because for all inputs the back-propagated signal received at the node ReLU(x2) is negative and is therefore not back-propagated through the ReLU operation (per the rules of deconvolution and guided back-propagation; see Springenberg et al. (2014) for details). As a result, the feature x2 receives zero attribution despite the network's output being sensitive to it."}]
r1G4z8cge
[{"section_index": "0", "section_name": "MOLLIFYING NETWORKS", "section_text": "Caglar Gulcehre', Marcin Moczulski?*, Francesco Visin?* Yoshua Bengi\n! University of Montreal, * University of Oxford, * Politecnico di Milano\nCaglar Gulcehre!, Marcin Moczulski2\"*, Francesco Visin?:* Yoshua Bengio!\nThe optimization of deep neural networks can be more challenging than the\ntraditional convex optimization problems due to highly non-convex nature of the\nloss function, e.g. it can involve pathological landscapes such as saddle-surfaces\nthat can be difficult to escape from for algorithms based on simple gradient descent\nIn this paper, we attack the problem of optimization of highly non-convex neural\nnetworks objectives by starting with a smoothed \u2014 or mollified \u2014 objective function\nwhich becomes more complex as the training proceeds. Our proposition is inspired\nby the recent studies in continuation methods: similarly to curriculum methods\nwe begin by learning an easier (possibly convex) objective function and let it\nevolve during training until it eventually becomes the original, difficult to optimize\nobjective function. The complexity of the mollified networks is controlled by a\nsingle hyperparameter that is annealed during training. We show improvements\non various difficult optimization tasks and establish a relationship between recent\nworks on continuation methods for neural networks and mollifiers."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In the last few years, deep neural networks \u2014 i.e. convolutional networks (LeCun et al., 1989)\nLSTMs (Hochreiter & Schmidhuber, 1997a) or GRUs (Cho et al., 2014) \u2014 set the state of the art on\na range of challenging tasks (Szegedy et al., 2014; Visin et al., 2015; Hinton et al., 2012; Sutskever\net al., 2014; Bahdanau et al., 2014; Mnih et al., 2013; Silver et al., 2016). However when trained\nwith variants of SGD (Bottou, 1998) deep networks can be difficult to optimize due to their highly\nnon-linear and non-convex nature (Choromanska et al., 2014; Dauphin et al., 2014).\nA number of approaches were proposed to alleviate the difficulty of optimization: addressin\nthe problem of the internal covariate shift with Batch Normalization (loffe & Szegedy, 2015\nlearning with a curriculum (Bengio et al., 2009), recently an approach to train RNNs with diffusio\nprocess (Mobahi, 2016), and graduated optimization (Hazan et al., 2015). The impact of nois\ninjection on the behavior of modern deep learning methods has been explored by Neelakantan et a\n(2015a). Hazan et al. (2015) have shown that injecting a particular noise and scheduling it carefull\ncan guarantee the convergence in O(1/o7e?) steps for e-optimal and o-nice functions. Similar to ov\nwork graduated optimization optimizes a smoothed objective function without performing expensiv\nconvolutions. Injecting noise to the activation functions and scheduling it have been recently show:\nto improve the performance on a wide variety of tasks (Gulcehre et al., 2016).\nWe connect the ideas of curriculum learning and continuation methods with those arising fron\nmodels with skip connections and using layers that compute near-identity transformations. Ski\nconnections allow to train very deep residual and highway architectures (He et al., 2015; Srivastav:\net al., 2015) by skipping layers or block of layers. 
Similarly, it has been shown that stochastically changing the depth of a network during training (Huang et al., 2016b) does not prevent convergence and enables better generalization performance.

We discuss the idea of mollification for neural networks, a form of differentiable smoothing of the loss function connected to noisy activations, which in our case can be interpreted as a form of adaptive noise injection controlled by a single hyperparameter. Inspired by Huang et al. (2016b), we use a hyperparameter to stochastically control the depth of our network. This allows us to start the optimization from a convex objective function (as long as the optimized criterion is convex, e.g. linear or logistic regression) and to slowly introduce more complexity into the model by annealing the hyperparameter, thus making the network deeper and increasingly non-linear.

* This work was done while these students were interning at the MILA lab in University of Montreal."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "An important difference of our work compared to injecting noise into the gradients, as explored in (Hazan et al., 2015; Neelakantan et al., 2015b), is that we inject the noise in the forward computation of the graph and thus shape the cost function directly. As a result, the cost functions of the mollified network at test time and during training are consistent, and this makes early stopping much easier.

Continuation methods and simulated annealing provide a general strategy to reduce the impact of local minima and to deal with non-convex, continuous, but not necessarily everywhere differentiable objective functions, by smoothing the original objective function and gradually reducing the amount of smoothing during training (Allgower & Georg, 1980) (see Fig. 1).

In machine learning, approaches based on curriculum learning (Bengio et al., 2009) are inspired by this principle and define a sequence of gradually more difficult training tasks (or training distributions) that eventually converge to the task of interest.

In this paper we construct a sequence of smoothed objective functions obtained with a form of mollification, and we progressively optimize them. The training procedure iterates over the sequence of objective functions, starting from the simpler ones, i.e. those with a smoother loss surface, and moving towards more complex ones until the last, original, objective function is reached.¹

¹We plan to release the source code of the models and experiments under http://github.com/caglar/molly_nets/.

Figure 1: A sequence of optimization problems of increasing complexity, where the first ones are easy to solve but only the last one corresponds to the actual problem of interest. It is possible to tackle the problems in order, starting each time at the solution of the previous one and tracking the local minima along the way.

In the context of stochastic gradient descent, we use a stochastic estimation of the gradient of the smoothed objective function. This is convenient because it may not be analytically feasible to compute the smoothed function, but a Monte-Carlo estimate can often be obtained easily.
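For concreteness, such a Monte-Carlo estimate of the mollified gradient can be sketched as follows, assuming a grad_fn callable (our naming) that returns the gradient of the original loss at given parameters:

import numpy as np

def mollified_gradient(theta, grad_fn, sigma, n_samples=8, rng=None):
    # Monte-Carlo estimate of E_tau[ grad L(theta - tau) ], tau ~ N(0, sigma^2 I):
    # average the gradient of the original loss at noise-perturbed parameters.
    rng = rng or np.random.default_rng(0)
    grads = [grad_fn(theta - sigma * rng.standard_normal(theta.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

Annealing sigma towards zero recovers the gradient of the original objective.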
We smooth the loss function L, which is parametrized by θ ∈ Rⁿ, by convolving it with another function K(·) with stride τ ∈ Rⁿ:

$$\mathcal{L}_K(\theta) = (\mathcal{L} * K)(\theta) = \int_{-\infty}^{+\infty} \mathcal{L}(\theta - \tau)\, K(\tau)\, \mathrm{d}\tau \qquad (1)$$

Although there are many choices for the function K(·), we focus on those that satisfy the definition of a mollifier. A mollifier is an infinitely differentiable function that behaves like an approximate identity in the group of convolutions of integrable functions. If K(·) is an infinitely differentiable function that converges to the Dirac delta function when appropriately rescaled, and for any integrable function L we have

$$\mathcal{L}(\theta) = \lim_{\epsilon \to 0} \int \epsilon^{-n}\, K(\tau/\epsilon)\, \mathcal{L}(\theta - \tau)\, \mathrm{d}\tau, \qquad (2)$$

then it is a mollifier.

If we choose K(·) to be a mollifier and obtain the smoothed loss function L_K as in Eqn. 1, we can take its gradient with respect to θ directly, using the result from Evans (1998):

$$\nabla_\theta \mathcal{L}_K(\theta) = \nabla_\theta(\mathcal{L} * K)(\theta) = (\mathcal{L} * \nabla K)(\theta). \qquad (3)$$

To relate the resulting gradient ∇_θ L_K to the gradient of the original function L, we introduce the notion of weak gradient, i.e. an extension of the idea of weak/distributional derivatives to functions with multidimensional arguments, such as loss functions of neural networks. For an integrable function L ∈ L([a, b]ⁿ), g ∈ L([a, b]ⁿ) is an n-dimensional weak gradient of L if it satisfies

$$\int_C g(\tau)\, \phi(\tau)\, \mathrm{d}\tau = -\int_C \mathcal{L}(\tau)\, \nabla\phi(\tau)\, \mathrm{d}\tau, \qquad (4)$$

where φ(τ) is an infinitely differentiable function vanishing at infinity, C ∈ [a, b]ⁿ and τ ∈ Rⁿ. For a function L that is differentiable almost everywhere, the weak gradient g(θ) is equal to ∇L almost everywhere. With a slight abuse of notation we can therefore write:²

$$\nabla_\theta \mathcal{L}_K(\theta) = (\mathcal{L} * \nabla K)(\theta) \qquad \text{by Eqn. 3}$$
$$= \int_C \mathcal{L}(\theta - \tau)\, \nabla K(\tau)\, \mathrm{d}\tau$$
$$= \int_C g(\theta - \tau)\, K(\tau)\, \mathrm{d}\tau \qquad \text{by Eqn. 4}$$
$$= \int_C \nabla_\theta \mathcal{L}(\theta - \tau)\, K(\tau)\, \mathrm{d}\tau.$$

²We omit for brevity the algebraic details involved with the translation of the argument.

It is possible to use the standard Gaussian distribution N(0, I) as a mollifier K(·), as it satisfies the desired properties: it is infinitely differentiable, a sequence of properly rescaled Gaussian distributions converges to the Dirac delta function, and it vanishes at infinity. With such a K(·) the gradient becomes

$$\nabla_\theta \mathcal{L}_K(\theta) = \int \nabla_\theta \mathcal{L}(\theta - \tau)\, p(\tau)\, \mathrm{d}\tau = \mathbb{E}_\tau[\nabla_\theta \mathcal{L}(\theta - \tau)], \quad \text{with } \tau \sim \mathcal{N}(0, I).$$

Exploiting the fact that a Gaussian distribution is a mollifier, we can focus on a sequence of mollifications indexed by the scaling parameter ε introduced in Eqn. 2. A single element of this sequence takes the form

$$\nabla_\theta \mathcal{L}_{\mathcal{N},\sigma}(\theta) = \int \nabla_\theta \mathcal{L}(\theta - \tau)\, \epsilon^{-n} p(\tau/\epsilon)\, \mathrm{d}\tau = \mathbb{E}_\tau[\nabla_\theta \mathcal{L}(\theta - \tau)], \quad \text{with } \tau \sim \mathcal{N}(0, \sigma^2 I),$$

so that

$$\lim_{\sigma \to 0} \nabla_\theta \mathcal{L}_{\mathcal{N},\sigma}(\theta) = \nabla_\theta \mathcal{L}(\theta).$$

An intuitive interpretation of the result is that σ determines the standard deviation of the mollifying Gaussian and is annealed in order to construct a sequence of gradually less "blurred" and closer approximations of the original cost function.

The Monte-Carlo estimators of the mollifiers can be easily implemented with neural networks, where the layers typically have the form

$$\mathbf{h}^l = f(\mathbf{W}^l \mathbf{h}^{l-1}),$$

with h^{l-1} a vector of activations from the previous layer in the hierarchy, W^l a matrix representing a linear transformation, and f an element-wise non-linearity of choice. A mollification of such a layer can be formulated as:

$$\mathbf{h}^l = f\big((\mathbf{W}^l - \xi^l)\,\mathbf{h}^{l-1}\big), \quad \text{where } \xi^l \sim \mathcal{N}(0, \sigma^2). \qquad (16)$$
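A minimal numpy sketch of the mollified layer in Eqn. 16, using the hard-sigmoid non-linearity adopted elsewhere in the paper (the function names are ours):

import numpy as np

def hard_sigmoid(x):
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def mollified_layer(h_prev, W, sigma, rng=None):
    # Weight-noise form of Eqn. 16: h^l = f((W^l - xi^l) h^{l-1}).
    rng = rng or np.random.default_rng(0)
    xi = sigma * rng.standard_normal(W.shape)
    return hard_sigmoid((W - xi) @ h_prev)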
From Eqn. 16, it is easy to see that both of the weight noise methods proposed by Hinton & van Camp (1993) and Graves (2011) can be seen as variations of a Monte-Carlo estimate of mollifiers.

We introduce a generalization of the concept of mollifiers that encompasses the approach we explored here and that is targeted during optimization via a continuation method using stochastic gradient descent.

Definition 2.1. (Generalized Mollifier). A generalized mollifier is an operator, where T_σ(f) defines a mapping between two functions such that, for σ > 0:

$$\lim_{\sigma \to 0} T_\sigma f = f, \qquad (17)$$
$$\lim_{\sigma \to \infty} T_\sigma f = \text{the identity function}, \qquad (18)$$
$$\frac{\partial (T_\sigma f)(x)}{\partial x} \ \text{exists} \ \forall x, \sigma > 0. \qquad (19)$$

In addition, we consider noisy mollifiers, which can be defined as an expected value of a stochastic function φ(x, ξ) under some noise source ξ with variance σ:

$$(T_\sigma f)(x) = \mathbb{E}_\xi[\phi(x, \xi_\sigma)] \qquad (20)$$

Definition 2.2. (Noisy Mollifier). We call a stochastic function φ(x, ξ_σ) with input x and noise ξ_σ a noisy mollifier if its expected value corresponds to the application of a generalized mollifier T_σ, as per Eqn. 20.

The composition of two noisy mollifiers sharing the same σ is also a noisy mollifier, since the three properties in the definition (Eqns. 17, 18, 19) are still satisfied. When σ = 0 no noise is injected and therefore the original function will be optimized. If σ → ∞ instead, the function will become an identity function. Thus, for instance, if we mollify each layer of a feed-forward network except the output layer, when σ → ∞ all the mollified layers will become identity functions and the objective function of the network with respect to its inputs will be convex.

Consequently, corrupting separately the activation function of each layer of a deep neural network (but with a shared noise level σ) and annealing σ yields a noisy mollifier for the objective function. This is related to the work of Mobahi (2016), who recently introduced a way of analytically smoothing the non-linearities to help the training of recurrent networks. Our approach differs from that one in two ways: we use a noisy mollifier (rather than an analytic smoothing of the network's non-linearities) and we introduce (in the next section) a particular form of the noisy mollifier that empirically proved to work well.

So far we obtained the mollified version L_K(θ) of the cost function L(θ) by convolving it with a mollifier K(θ). The kernel K(θ) corresponds to the average effect of injecting noise ξ sampled from a standard Normal distribution. The amount of noise controls the amount of smoothing. Gradually reducing the noise during training is related to a form of simulated annealing (Kirkpatrick et al., 1983). Similarly to the analysis in Mobahi (2016), we can write a Monte-Carlo estimate

$$\mathcal{L}_K(\theta) = (\mathcal{L} * K)(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(\theta - \xi^{(i)}).$$

We provide the derivation and the gradient of this equation in Appendix A.

Shaping the cost function to define a sequence of costs that progresses from easier to more difficult ones can be related to reward shaping (Ng et al., 1999; Ng, 2003). In our algorithm, we shape the cost and the model architecture itself, rather than the rewards or the targets, in order to make the optimization easier. In that sense, reward shaping can be considered to be closer to curriculum learning."}, {"section_index": "3", "section_name": "3. METHOD", "section_text": "We use a noisy mollifier based on our definition in Section 2.4.
Instead of convolving the objective function with a kernel explicitly, we mollify the network itself: noise is injected into the forward computation of each layer and annealed over the course of training.

We propose an algorithm to mollify the cost of a neural network which also addresses an important drawback of previously proposed noisy training procedures: as the noise gets larger, it can dominate the learning process and lead the algorithm to perform a random walk on the energy landscape of the objective function. Conversely, in our algorithm, as the noise gets larger, gradient descent minimizes a simpler (e.g. convex) but still meaningful objective function.

We define the desired behavior of the network in the limit cases where the noise is very large or very small, and modify the model architecture accordingly. Specifically, during training we minimize a sequence of increasingly complex noisy objectives $\mathbf{L} = (\mathcal{L}^1(\theta; \xi_{\sigma_1}), \mathcal{L}^2(\theta; \xi_{\sigma_2}), \cdots, \mathcal{L}^k(\theta; \xi_{\sigma_k}))$ that we obtain by annealing the scale (variance) of the noise, σ_i. Let us note that our algorithm satisfies the fundamental properties of the generalized and noisy mollifiers that we introduced earlier:

1. We start the training by optimizing a convex objective function that is obtained by configuring all the layers between the input and the last cost layer to compute an identity function, i.e., by skipping both the affine transformations and the blocks followed by nonlinearities.

2. During training, the magnitude of the noise, which is proportional to p, is annealed, allowing a gradual evolution from identity transformations to linear transformations between the layers.

3. Simultaneously, as we decrease p, the noisy mollification procedure allows the element-wise activation functions to gradually change from linear to nonlinear."}, {"section_index": "4", "section_name": "SIMPLIFYING THE OBJECTIVE FUNCTION FOR FEEDFORWARD NETWORKS", "section_text": "For every unit of each layer, we either copy the activation (output) of the corresponding unit of the previous layer (the identity path in Figure 2) or output a noisy activation h^l of a non-linear transformation of it, φ(h^{l-1}, ξ; W^l), where ξ is noise, W^l is a weight matrix applied to h^{l-1}, and π^l is a vector of binary decisions, one for each unit (the convolutional path in Figure 2):

$$\pi^l \sim \mathrm{Bernoulli}(p^l),$$
$$\mathbf{h}^l = \psi(\mathbf{h}^{l-1}, \xi, \pi^l; \mathbf{W}^l) = \pi^l \odot \mathbf{h}^{l-1} + (1 - \pi^l) \odot \phi(\mathbf{h}^{l-1}, \xi; \mathbf{W}^l).$$

To decide which path to take, for each unit in the network a binary stochastic decision π^l is drawn from a Bernoulli distribution with probability dependent on the decaying value of p^l. If the number of hidden units of layer l - 1 and layer l + 1 is not the same, we can either zero-pad layer l - 1 before feeding it into the next layer or apply a linear projection to obtain the right dimensionality.

For p^l = 1, the layer computes the identity function, leading to a convex objective. If p^l = 0, the layer computes the original non-linear transformation, unfolding the full capacity of the model. We call the connections introduced this way unitwise stochastic skip connections (USSC). The pseudo-code for the mollified activations is reported in Algorithm 1.

In DropIn layers (Smith et al., 2016), a binary random vector is sampled from a Bernoulli distribution to decide whether to introduce a skip connection from the layer below, l - 1, for each layer l, and this is used as a regularization. As opposed to USSC, DropIn layers do not necessarily achieve a convex objective function as the DropIn ratio (p^l) increases.
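A minimal numpy sketch of the unit-wise stochastic skip connection follows; W is assumed square so both paths share a dimensionality, and the names are ours:

import numpy as np

def hard_sigmoid(x):
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def ussc_layer(h_prev, W, p, sigma, train=True, rng=None):
    # Each unit either copies its input (identity path, probability p) or
    # applies the noisy non-linear transformation phi.
    rng = rng or np.random.default_rng(0)
    phi = hard_sigmoid((W - sigma * rng.standard_normal(W.shape)) @ h_prev)
    if train:
        pi = rng.binomial(1, p, size=h_prev.shape)  # pi^l ~ Bernoulli(p^l)
    else:
        pi = p  # at inference we use the expected value of pi
    return pi * h_prev + (1 - pi) * phi

With p = 1 this computes the identity, and with p = 0 the full noisy transformation, matching the two limit cases above.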
Figure 2: Top: stochastic depth. Bottom: mollifying network. The dashed line represents the optional residual connection. In the top path, the input is processed with a convolutional block followed by a noisy activation function, while in the bottom path the original activation of layer l - 1 is propagated untouched. For each unit, one of the two paths is picked according to a binary stochastic decision π.

Algorithm 1: Activation of a unit i at layer l."}, {"section_index": "5", "section_name": "LINEARIZING THE NETWORK", "section_text": "In Section 2, we showed that convolving the objective function with a particular kernel can be approximated by adding noise to the activation function. This method may suffer from excessive random exploration when the noise is very large.

We address this issue by bounding the element-wise activation function f(·) with its linear approximation when the variance of the noise is very large, after centering it at the origin. The resulting function f*(·) is bounded and centered around the origin. Note that centering the sigmoid or hard-sigmoid makes them symmetric with respect to the origin. With a proper choice of the standard deviation σ(x), the noisy activation function becomes a linear function of the input when p^l is large, as illustrated by Figure 3.

Let u*(x) = u(x) - u(0), where u(0) is the offset of the function from the origin, and let x_i be the i-th dimension of an affine transformation of the output of the previous layer h^{l-1}: x_i = w_i^⊤ h^{l-1} + b_i. Then:

$$\psi(x_i, \xi; w_i) = \mathrm{sgn}(u^*(x_i))\,\min\big(|u^*(x_i)|,\; |f^*(x_i) + \mathrm{sgn}(u^*(x_i))\,|s_i|\,|\big) + u(0), \qquad (23)$$
$$s_i \sim \mathcal{N}(0,\; p^l c\,\sigma(x_i)),$$

where c is a hyperparameter controlling the scale of the noise.

Figure 3: The figures show how the model is evolved to get closer to a linear network. Arrows denote the direction of the noise pushing the activation function towards the linear function. a) The quasi-convex envelope established by |sigmoid(x) - 0.5| around |0.25x|. b) A depiction of how the noise pushes the sigmoid to become a linear function.

We have a simpler form of the equations to linearize the ReLU (Nair & Hinton, 2010) activation function when the noise is very large. Instead of the complicated Eqn. 23, we can use the simpler Eqn. 26 to achieve the linearization of the activation function:

$$s_i = \min\big(|x_i|,\; p^l \sigma(x_i)\,|\xi|\big),$$
$$\psi(x_i, \xi; w_i) = f(x_i) - s_i. \qquad (26)$$

In a similar vein, it is possible to smooth the objective functions of LSTM and GRU networks by starting the optimization procedure with a simpler objective function, such as a word2vec, BoW-LM or CRF objective, at the beginning of training, and gradually increasing the difficulty of the optimization by increasing the capacity of the network.

For GRUs, we set the update gate to 1/t (where t is the time-step index) and the reset gate to 1 when the noise is very large. Similarly for LSTMs, when the noise is very large we can set the output gate to 1 (or close to 1), the input gate to 1/t, and the forget gate to 1 - 1/t. This way the LSTM will behave like a BOW model. In order to achieve this behavior, the activations ψ(x_i, ξ_i) of the gates can be formulated as

$$\psi(x_i, \xi_i) = f\big(x_i + p^l \sigma(x_i)\,|\xi_i|\big). \qquad (27)$$

By using a particular formulation of σ(x_i) that constrains it in expectation over ξ when p^l = 1, we can obtain a function within the range of f(·) that is discrete in expectation, but still differentiable per sample:

$$\sigma(x_i) = \frac{f^{-1}(1) - x_i}{\mathbb{E}_\xi[|\xi|]}. \qquad (28)$$

We provide the derivation of Eqn. 28 in Appendix B. The gradient of Eqn. 28 is a Monte-Carlo approximation to the gradient of f(x_i^l).
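The following sketch shows the simpler ReLU variant in the spirit of our reconstruction of Eqn. 26; the choice σ(x) = c·|x| is an illustrative assumption of ours rather than the paper's exact formulation:

import numpy as np

def noisy_linearized_relu(x, p, c=1.0, rng=None):
    # The noise term s pulls f(x) = ReLU(x) towards its linear envelope as
    # p grows; with p = 0 the plain ReLU is recovered.
    rng = rng or np.random.default_rng(0)
    sigma = c * np.abs(x)  # illustrative choice of sigma(x)
    s = np.minimum(np.abs(x), p * sigma * np.abs(rng.standard_normal(x.shape)))
    return np.maximum(x, 0.0) - s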
We used a different annealing schedule for each layer of the network, such that the noise in the lower layers anneals faster. This is similar to the linearly decaying probability of layers in Huang et al. (2016b).

Exponential Decay. In our experiments, we focused on an annealing schedule similar to the inverse sigmoid rule of Bengio et al. (2015) for p_t^l:

$$p_t^l = 1 - e^{-\frac{k\, v_t\, l}{t\, L}},$$

with hyper-parameter k > 0 at the t-th update for the l-th layer, where L is the number of layers of the model. We stop annealing when the expected depth $\bar{p}_t = \frac{1}{L}\sum_{l=1}^{L} p_t^l$ reaches some threshold δ. In our experiments we set v_t to be a moving average of the loss³ of the network, but for some of our experiments this resulted in unstable training behavior, so we had to fix v_t to 1. An advantage of using a running average of the loss for v_t is that the behavior of the loss/optimization can directly influence the annealing behavior of the network, because we have:

$$\lim_{v_t \to \infty} p_t^l = 1 \quad \text{and} \quad \lim_{v_t \to 0} p_t^l = 0.$$

This has a desirable property: when the training loss is high, the noise injected into the system will be large as well. As a result, the model is encouraged to do more exploration, while if the model converges, the noise injected into the system by the mollification procedure will go to zero.

Furthermore, in our experiments we observe that training with noisy mollifiers can potentially be helpful for generalization. This can be due to the noise induced into back-propagation by the noisy mollification, which makes SGD more likely to converge to a flatter minimum (Hochreiter & Schmidhuber, 1997b), because the noise will help it escape from sharper local minima.

³Depending on whether the model overfits or not, this can be a moving average of the training or validation loss.

We compare the different annealing schedules described in this paper in Figure 4.

Figure 4: A comparison of the different annealing schedules (square-root annealing, linear annealing, and exponential decay with k = 100, 50, 10), plotted with respect to the number of iterations on the x-axis.
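The schedules compared in Figure 4 can be sketched as below; the exponential form follows our reading of the reconstructed p_t^l formula, while the linear and square-root variants are illustrative:

import numpy as np

def p_exponential(t, layer, n_layers, k=100.0, v_t=1.0):
    # p_t^l = 1 - exp(-k v_t l / (t L)), t >= 1: lower layers (small l) anneal
    # faster, and a large loss v_t keeps p (and hence the noise) high.
    return 1.0 - np.exp(-k * v_t * layer / (t * n_layers))

def p_linear(t, t_max):
    return max(0.0, 1.0 - float(t) / t_max)

def p_sqrt(t, t_max):
    return max(0.0, 1.0 - np.sqrt(min(float(t), t_max) / t_max))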
"}, {"section_index": "6", "section_name": "8 EXPERIMENTS", "section_text": "In this section we mainly focus on the training of difficult-to-optimize models, in particular deep MLPs with sigmoid or tanh activation functions. The details of the experimental procedure are provided in Appendix C.

We train a thin deep neural network on the MNIST (LeCun & Cortes, 1998) dataset, with 72 hidden layers of 100 hidden units each. We train our models with the Adam (Kingma & Ba, 2014) optimizer and fix the learning rate of all the models to 3e-4. We use the same learning rate for all the models in order to factor out the possibility that a model converges faster merely due to a larger learning rate.

Firstly, in Figure 5, we investigate the effect of using different annealing schedules. Exponential decay converges faster compared to linear decay and square-root decay of p. We found training to be very unstable with linear and square-root decay, in particular for large c values. Thus we had to use a smaller c value (20 instead of 100) to be able to train the model without causing it to diverge.

Figure 5: We compare the training performance (NLL versus iterations) of the different annealing methods used with the mollification procedure to anneal the parameter p. Decaying p exponentially achieves both better training and validation performance.

We tried to run an experiment with the Monte-Carlo approximation of the mollification derived in Appendix A; however, when we start with large noise and anneal it during training, the model is very unstable and training diverges. If we start with small noise and anneal its magnitude during training, we could not observe any effect on training.

In Figure 6, we show the effect of the noisy training procedure we introduced, sampling masks from Bernoulli and Gaussian distributions, versus using the deterministic approximation of this noisy procedure (which we also use at test time) during training as well.

Figure 6: Learning curves of the model where we do not inject noise during training and instead use the deterministic approximation of the mollification during training as well. The difference in terms of speed of learning is very small.

In Figure 7, we compare the results obtained for the mollified model with and without batch normalization against feed-forward residual networks. The mollified model performs very closely to the MLP trained with residual connections and batch norm. However, using residual connections and batch norm does not seem to improve the results.

Figure 7: We investigate the effect of using batch norm and residual connections for mollification, compared against a network with residual connections and batch norm. The effect of batch norm on this task for mollification seems to be very small, and the training convergence performance of all the approaches is very close.

Figure 8: The learning curves of a 6-layer MLP with sigmoid activation function on the 40-bit parity task.
All the models are initialized\nwith Glorot initialization Glorot et al. (2011) and trained with SGD with momentum. We compare\nan MLP with residual connections using batch normalization and a mollified network with sigmoid\nactivation function. As can be seen in Figure 8, the mollified network converges faster.\nDeep Pentomino Pentomino is a toy-image dataset where each image has 3 Pentomino blocks\nThe task is to predict whether if there is a different shape in the image or not (Giilgehre & Bengio\n2013). The best reported result on this task with MLPs is 68.15% accuracy (Gulcehre et al., 2014)\nThe same model as ours trained without noisy activation function and vanilla residual connections\n\nscored 69.5% accuracy, while our mollified version scored 75.15% accuracy after 100 epochs of\ntraining on the 80k; dataset.\nCIFAR10 We experimented with deep convolutional neural networks of 110-layers with residual\nblocks and residual connections comparing our model against ResNet and Stochastic depth. We\nadapted the hyperparameters of the Stochastic depth network from Huang et al. (2016a) and we used\nthe same hyperparameters for our algorithm. We report the training and validation curves of the\nthree models in Figure 10 and the best test accuracy obtained early stopping on validation accuracy\nover 500 epochs in Table 1. Our model achieves better generalization than ResNet. Stochastic depth\nachieves better generalization, but it might be possible to combine both and obtain better results.\nFigure 7: We investigate the effect of using batch norm and residual connection for mollifcation and\ncompare against to the network with residual connections and batch-norm. The effect of batch norm\non this task for mollification seems to be very small and training convergence performance of the all\nthe approaches are very close to each other.\nTest Accuracy\n\nStochastic Depth\nMollified Convnet\n\n93.25\n92.45\n91.78\nTable 1: CIFAR10 deep convolution\nneural network.\nFigure 9: The training curve of a bidirectional.\nRNN that predicts the embedding corresponding\n\u2018\u00a9 a sequence of characters.\nPredicting the Character Embeddings from Characters Learning the mapping from sequence:\nof characters to the word-embeddings is a difficult problem. Thus one needs to use a highly non-linea\nfunction. We trained a word2vec model on Wikipedia with embeddings of size 500 (Mikolov et al\n2014) with a vocabulary of size 374557.\nLSTM Language Modeling We evaluate our model on LSTM language modeling. Our baseline\nmodel is a 3-layer stacked LSTM without any regularization. We observed that mollified model con-\nverges faster and achieves better results. We provide the results for PTB language modeling in Table 2."}, {"section_index": "7", "section_name": "10 CONCLUSION", "section_text": "We propose a novel method for training neural networks inspired by an idea of continuation\nsmoothing techniques and recent advances in non-convex optimization algorithms. The methoc\nmakes learning easier by starting from a simpler model, solving a well-behaved problem, anc\ngradually transitioning to a more complicated setting. We show improvements on very deep models\ndifficult to optimize tasks and compare with powerful techniques such as batch-normalization anc\nresidual connections. 
We also show that the mollification procedure improves the generalizatior\nperformance of the model on two tasks."}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Nicholas Ballas and Misha Denil for the valuable discussions and their feedback. We\nwould like to also thank the developers of Theano 4, for developing such a powerful tool for scientific\ncomputing Theano Development Team (2016). We acknowledge the support of the following\norganizations for research funding and computing support: NSERC, Samsung, Calcul Qu\u00e9bec\nCompute Canada, the Canada Research Chairs and CIFAR."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "al- Table 2: 3-layered LSTM net-\nng work on word-level language\nmodeling for PTB.\nOur future work includes testing this method on large-scale language tasks that require long training\ntime, e.g., machine translation and language modeling. Moreover, (Kaiser & Sutskever, 2015)\nobserved that the training of Neural-GPU model can be improved significantly by using gradient noise\nwhich can be related to the smoothing of the loss surface, it would be interesting to try mollification on\nthis model to see if the training of Neural GPU can be made easier by using mollification procedure.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly\nlearning to align and translate. arXiv preprint arXiv: 1409.0473, 2014.\nYoshua Bengio, J\u00e9r\u00e9me Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Pro\nceedings of the 26th annual international conference on machine learning, pp. 41-48. ACM, 2009\nL\u00e9on Bottou. Online algorithms and stochastic approximations. In David Saad (ed.), Online Learnin\nin Neural Networks. Cambridge University Press, Cambridge, UK, 1998.\nKyunghyun Cho, Bart Van Merri\u00e9nboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares,\nHolger Schwenk, and Yoshua Bengio. Learning phrase representations using mn encoder-decoder\nfor statistical machine translation. arXiv preprint arXiv: 1406.1078, 2014.\nAnna Choromanska, Mikael Henaff, Michael Mathieu, G\u00e9rard Ben Arous, and Yann LeCun. The\nloss surface of multilayer networks, 2014.\nYann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshuz\nBengio. Identifying and attacking the saddle point problem in high-dimensional non-conve>\noptimization. In NJPS\u20192014, 2014.\nAlex Graves. Practical variational inference for neural networks. In Advances in Neural Information\nProcessing Systems, pp. 2348-2356, 2011.\nElad Hazan, Kfir Y Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic\nnon-convex problems. arXiv preprint arXiv: 1503.03712, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image\nrecognition. arXiv preprint arXiv: 1512.03385, 2015.\nSepp Hochreiter and Jiirgen Schmidhuber. Flat minima. Neural Computation, 9(1):1\u201442, 1997b\nYann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998.\nTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. word2vec, 2014.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, and Daar\nWierstra. Playing atari with deep reinforcement learning. Technical report, arXiv: 1312.5602, 2012\nAndrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations:\nTheory and application to reward shaping. In JCML, volume 99, pp. 
278-287, 1999.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. Technical report, Google, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541-551, December 1989. ISSN 0899-7667. doi: 10.1162/neco.1989.1.4.541. URL http://dx.doi.org/10.1162/neco.1989.1.4.541.

Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pp. 2368-2376, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014."}, {"section_index": "10", "section_name": "Appendix", "section_text": "∇L_K(θ) = (∇L ∗ K)(θ) = E_ξ[∇L(θ - ξ)], which can be estimated by a Monte Carlo average

    ∂L_K(θ)/∂θ ≈ (1/N) Σ_{i=1}^{N} ∂L(θ - ξ^{(i)})/∂θ,

where ξ^{(i)} is a realization of the noise random variable ξ. Therefore, introducing additive noise to the input of L(θ) is equivalent to mollification.

Assume that z_i^l = x_i^l + p_i^l σ(x^l)|ξ^l| and E_ξ[ψ(x_i^l, ξ_i)] = t. Thus for all z_i^l,

    E_ξ[ψ(x_i^l, ξ_i)] = E_ξ[f(z_i^l)],
    t = E_ξ[f(z_i^l)],
    E_ξ[f(z_i^l)] ≈ f(E_ξ[z_i^l]),  assuming f(·) behaves similarly to a linear function
                                    (since we use the hard-sigmoid for f(·), this will hold),
    f^{-1}(t) ≈ E_ξ[z_i^l],
    f^{-1}(t) = x_i^l + p_i^l σ(x^l) E_ξ[|ξ^l|].

As a corollary, the value that ψ(x_i^l, ξ) should take in expectation for p_i^l = 1 would be t = f(x_i^l + σ(x^l) E_ξ[|ξ^l|]).

In our experiments we used the hard-sigmoid activation function for f(·). We used this piecewise-linear activation function so that its inverse can be computed as f^{-1}(x) = 4(x - 0.5). During inference we use the expected values of the random variables π and ξ.

The weights of the models are initialized with Glorot & Bengio initialization (Glorot et al., 2011). We use a learning rate of 4e-4 along with RMSProp. We initialize the a_i parameters of the mollified activation function by sampling from a uniform distribution, U[-2, 2]. We used 100 hidden units at each layer with minibatches of size 500.

We train a 6-layer MLP with sigmoid activation functions using SGD with momentum. We used 200 units per layer and a learning rate of 1e-3.
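To make the noisy-activation construction above concrete, here is a minimal numpy sketch of the quantities involved: the hard-sigmoid f, its inverse on the linear region, the noisy pre-activation z = x + p·σ·|ξ|, and a Monte Carlo estimate of the mollified gradient E_ξ[∇L(θ - ξ)]. The toy loss, noise scale, and sample count are placeholder assumptions, not values from the paper.

    import numpy as np

    def hard_sigmoid(x):
        # f(x) = clip(0.25 * x + 0.5, 0, 1); linear on [-2, 2]
        return np.clip(0.25 * x + 0.5, 0.0, 1.0)

    def hard_sigmoid_inv(y):
        # Inverse on the linear region: f^{-1}(y) = 4 * (y - 0.5)
        return 4.0 * (y - 0.5)

    def noisy_preactivation(x, p, sigma, rng):
        # z = x + p * sigma * |xi|, xi ~ N(0, 1); p anneals the noise over training
        xi = rng.standard_normal(x.shape)
        return x + p * sigma * np.abs(xi)

    def mollified_grad(loss_grad, theta, noise_std=0.01, n_samples=8, rng=None):
        # Monte Carlo estimate of E_xi[grad L(theta - xi)]
        rng = rng or np.random.default_rng(0)
        grads = [loss_grad(theta - noise_std * rng.standard_normal(theta.shape))
                 for _ in range(n_samples)]
        return np.mean(grads, axis=0)

    # Toy usage: mollify the gradient of L(theta) = sum(sin(theta) + 0.1 * theta**2)
    grad = lambda th: np.cos(th) + 0.2 * th
    theta = np.array([1.0, -2.0])
    print(mollified_grad(grad, theta))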
"}, {"section_index": "11", "section_name": "C.3 CIFAR10", "section_text": "We use the same model with the same hyperparameters for ResNet, the mollified network and stochastic depth. We borrowed the hyperparameters of the model from Huang et al. (2016a). Our mollified convnet model has residual connections coming from the layer below.

Figure 10: Training and validation losses over 500 epochs of a mollified convolutional network composed of 110 layers. We compare against ResNet and Stochastic depth."}, {"section_index": "12", "section_name": "C.4 PARITY", "section_text": "The n-dimensional parity task is to figure out whether the sum of n bits in a binary vector is even or odd. We use SGD with Nesterov momentum and initialize the weight matrices using Glorot & Bengio initialization (Glorot et al., 2011). For all models, we use a learning rate of 1e-3 and momentum of 0.92. The a_i parameters of the mollified activation function are initialized by sampling from a uniform distribution, U[-2, 2]."}, {"section_index": "13", "section_name": "C.5 LSTM LANGUAGE MODELING", "section_text": "We trained 2-layered LSTM language models on PTB at the word level. We used the same hyperparameters as in Zaremba & Sutskever (2014), for both the mollified LSTM language model and the LSTM. We use the hard-sigmoid activation function for both the LSTM and the mollified LSTM language model, including for the gates of the LSTM."}]
HyNxRZ9xg
[{"section_index": "0", "section_name": "CAT2VEC: LEARNING DISTRIBUTED REPRESENTA-\nTION OF MULTI-FIELD CATEGORICAL DATA", "section_text": "Ying Wen, Jun Wang\n{ying.wen, jun.wang}@cs.ucl.ac.uk"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "There are different abstraction levels within data. For the low-abstraction continuous sensory date\n(such as images, videos, and audio) directly acquired from the physical world, quite often, the strong\ncorrelations (local patterns) are known a priori within the data. As such, one can directly embed the\nprior knowledge into a ions ACTRESS model such as neural networks to apm aticaly distil such pat:\nterns and perform predictions (Krizhevsky et al) 2012) [Graves et a1] 2013 2013). However, on the othe:\nhand, for high-abstraction data from our social and business activities, such as natural language anc\ntransnational log data, the data is commonly discrete and contains atomic symbols, whose meaning\nand correlation are unknown a priori. A typical solution is to employ embedding techniques (Bengic\n\net al.||2003}|Mikolov et al.|/2013) to map the discrete tokens into a (low-dimensional) continuous\n\nspace and further build neural networks to learn the latent patterns.\nMulti-field categorical data is a type of high-abstraction data where the categories in each field are\nheterogeneous with those in other fields. Such a type of data is very widely used in data mining\ntasks based on transaction logs from many social or commercial applications, such as recommender:\nsystems, social link prediction, and computational advertising. Table[T] gives an example of multi.\nfield categorical data in user behaviour targeting where we observe user browsing patterns, and giver\nthose multi-field categorical features, a common task is to predict their actions such as clicks anc\n\nconversions (Zhang et al. 2014} Liao et al. 2014} Yuan et al.| 2013).\nAs there is no explicit dependency among these inter-field categories, two solutions are mainly\nused for building machine learning models that extract the local patterns of the data and make\ngood predictions. The first solution is to create combining features across fields, such as\nSuch feature engineering is ex-\nfunctions\n\nCiTy:SHANGHAI& WEEKDAY:FRIDAY (Chapelle et al.\n\npensive on human efforts and feature/parameter space. The second solution is to bui\n\n(Rendle| ) or neural networks based on the feature embeddings (Zhang et al.\n\nsolutions are of low efficiency because of the brute-force feature engineering or aimless embedding\ninteractions.\nTianyao Chen, Weinan Zhang\nYing Wen, Jun Wang \u2018ianyao Chen, Weinan Zhang\nUniversity College London, UK Shanghai Jiao Tong University\nMediaGamma Ltd, UK Shanghai, China\n\n{ying.wen, jun.wang}@cs.ucl -ac.uk {tychen, wnzhang}@apex. sjtu.edu.cn\n{tychen, wnzhang}@apex.sjtu.edu.cn"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "This paper presents a method of learning distributed representation for multi-field\ncategorical data, which is a common data format with various applications such\nas recommender systems, social link prediction, and computational advertising.\nThe success of non-linear models, e.g., factorisation machines, boosted trees, has\nproved the potential of exploring the interactions among inter-field categories.\nInspired by Word2Vec, the distributed representation for natural language, we\npropose Cat2Vec (categories to vectors) model. 
In Cat2Vec, a low-dimensional continuous vector is automatically learned for each category in each field. The interactions among inter-field categories are further explored by different neural gates and the most informative ones are selected by pooling layers. In our experiments, with the exploration of the interactions between pairwise categories over layers, the model attains great improvement over state-of-the-art models in a supervised learning task, e.g., click prediction, while capturing the most significant interactions from the data.

Table 1: A simple example of multi-field categorical data from the iPinYou dataset (Liao et al., 2014)

TARGET               GENDER    WEEKDAY    CITY        BROWSER
1                    MALE      TUESDAY    BEIJING     CHROME
0                    FEMALE    MONDAY     SHANGHAI    IE
1                    FEMALE    TUESDAY    HONGKONG    IE
0                    MALE      TUESDAY    BEIJING     CHROME
NUMBER OF CATEGORY   2         7          351         6

In this paper, we propose an unsupervised pairwise interaction model to learn the distributed representation of multi-field categorical data. The interactions among inter-field categories are explored by different neural gates and the informative ones are selected by K-max pooling layers. Note that the K-max pooling process acts like the classic Apriori algorithm in frequent itemset mining and association rule learning (Agrawal et al., 1994). Repeating this pairwise interaction with K-max pooling, our Cat2Vec model automatically extracts salient feature interactions and further explores higher-order interactions.

To train the pairwise interaction Cat2Vec model effectively, we present a discriminant training method to estimate the category vectors. Furthermore, with the exploration of the pairwise and high-order category interactions, our Cat2Vec model attains great performance improvement over state-of-the-art models in supervised learning tasks, such as user response rate prediction, while successfully capturing the most significant interactions in unsupervised learning tasks."}, {"section_index": "3", "section_name": "2.2 DISTRIBUTED REPRESENTATION", "section_text": "In this section, we outline the major data representation methods that are used for representing discrete categorical data. These methods serve as the preliminaries of our Cat2Vec model.

It is common to use one-hot representation for discrete data in natural language processing or computational advertising tasks. Taking the first data sample as an example, the data is vectorised by one-hot encoding as

    [0,1], [0,1,0,0,0,0,0], [0,...,0,1,0,...,0] (length 351), [1,0,0,0,0,0],

where the four blocks encode GENDER:MALE, WEEKDAY:TUESDAY, CITY:BEIJING and BROWSER:CHROME, respectively.

With each category as a dimension, one-hot representation preserves full information of the original data. Two main problems of one-hot representation are that (i) it may suffer from the curse of dimensionality, especially in deep learning-related applications; and (ii) it cannot capture the similarity of each word/category pair, and we cannot even find any relationships among the synonyms or categories in the same field.
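As a concrete illustration of this one-hot encoding, the sketch below vectorises the first row of Table 1. The field vocabularies are toy stand-ins (the real CITY field has 351 categories), so only the GENDER, WEEKDAY and BROWSER blocks match the dimensions quoted above.

    from itertools import chain

    # Toy field vocabularies mirroring Table 1 (CITY truncated for illustration).
    fields = {
        "GENDER": ["FEMALE", "MALE"],
        "WEEKDAY": ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY",
                    "FRIDAY", "SATURDAY", "SUNDAY"],
        "CITY": ["BEIJING", "SHANGHAI", "HONGKONG"],
        "BROWSER": ["CHROME", "IE", "FIREFOX", "SAFARI", "OPERA", "EDGE"],
    }

    def one_hot(sample):
        """Concatenate a one-hot block per field; dimensionality = sum of field sizes."""
        blocks = []
        for field, categories in fields.items():
            block = [0] * len(categories)
            block[categories.index(sample[field])] = 1
            blocks.append(block)
        return list(chain.from_iterable(blocks))

    row = {"GENDER": "MALE", "WEEKDAY": "TUESDAY", "CITY": "BEIJING", "BROWSER": "CHROME"}
    print(one_hot(row))  # [0,1, 0,1,0,0,0,0,0, 1,0,0, 1,0,0,0,0,0]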
Distributed representation is first proposed by Hinton (1986). The basic idea of distributed representation is training the model to map each word into a d-dimension vector (generally, d is a hyperparameter of the model and is far smaller than the whole vocabulary size N of words/categories), and the semantic similarity between the words/categories can be measured through the distance (such as cosine similarity or Euclidean distance) of their corresponding low-dimension vectors. Word2Vec (Mikolov et al., 2013) is one of the most common methods to train the distributed word vector representation. Compared with text, which has local patterns among neighbouring words, multi-field categorical data has no explicit order relationships among inter-field categories. Also, the text vocabulary size is often much smaller than the category size, making our problem more difficult. Another difference between our Cat2Vec and Word2Vec is that Cat2Vec does not take order into account or use any sliding window for context; in other words, we take all categories in the same training sample as the neighbours of a category.

Figure 1: The proposed sample encoding module. At first, each category pair is fed into a gate to get the interaction between the two categories. Next, K-max pooling captures the important interactions. The above two steps are repeated, which can capture higher-level category interactions. Finally, a fully connected layer transforms the final interaction vectors into the prediction.

In this section, we introduce the pairwise interaction Cat2Vec model and its training method in detail. We design neural gates in the model to capture the interactions between each pair of categories, followed by K-max pooling layers to select the most important interactions. We then repeat these processes to explore higher-level interactions. Figure 1 illustrates the overview of the proposed architecture."}, {"section_index": "4", "section_name": "3.1 INTERACTION AND POOLING LAYERS", "section_text": "Interaction Layer. To evaluate the interaction between each pair of categories, we use a gate to obtain the interaction result. Mathematically, a gate is a function f : R^d x R^d -> R^d that takes any pair of category vectors c_i and c_j in the same sample c as input, and outputs the interaction result vector c'_{i,j} = f(c_i, c_j). The interaction output vector c'_{i,j} acts as a certain combining feature of c_i and c_j. Note that c'_{i,j} keeps the same dimension as the category embedding vectors c_i and c_j, so that it can be further used to interact with other categories.

We provide several options for the gate f:

    f^{sum}(c_i, c_j) = c_i + c_j,
    f^{mul}(c_i, c_j) = c_i ⊙ c_j,

where ⊙ is the element-wise multiplication operator. We can also employ more complex gates, such as the highway gate (Srivastava et al., 2015), which is formulated as

    f^{highway}(c_i, c_j) = T ⊙ g(W_H(c_i + c_j) + b_H) + (1 - T) ⊙ (c_i + c_j).

Applying the gate to every pair of category vectors in a sample yields the interaction results

    c' = [c'_{1,2}, c'_{1,3}, ..., c'_{n-2,n-1}, c'_{n-1,n}].    (5)

After the interaction, an activation function is applied to implement the non-linear transformation.

K-Max Pooling Layer. We next describe a pooling operation that is a generalisation of max pooling, based on the norm length of the interaction outputs of each pair of category vectors.
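To make these mechanics concrete, here is a small numpy sketch of the interaction layer and the pooling step; it keeps the K largest-norm interaction vectors, as detailed in the next paragraph. The embedding sizes and the random inputs are illustrative assumptions.

    import numpy as np
    from itertools import combinations

    def mul_gate(ci, cj):
        # f^{mul}(c_i, c_j) = c_i * c_j (element-wise), one of the gates above
        return ci * cj

    def pairwise_interactions(c, gate=mul_gate):
        # c: (n, d) matrix of category embeddings for one sample
        return np.stack([gate(c[i], c[j]) for i, j in combinations(range(len(c)), 2)])

    def k_max_pooling(c_prime, k):
        # Keep the k interaction vectors with the largest L2 norm, so the pooled
        # result has the same shape as the original embedding matrix.
        norms = np.linalg.norm(c_prime, axis=1)
        top = np.argsort(norms)[-k:][::-1]
        return c_prime[top]

    rng = np.random.default_rng(0)
    c = rng.normal(size=(4, 8))          # a sample with n=4 categories, d=8
    pooled = k_max_pooling(pairwise_interactions(c), k=4)
    print(pooled.shape)                  # (4, 8), ready for another interaction round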
We keep the K maximum interaction output vectors c'_{i,j} according to their norm length, where K is the number of original categories in the training sample. This keeps the max-pooling result c_kmax = [c'_1, c'_2, ..., c'_K] the same size as the original embedding matrix c, where c'_k is the embedding vector in c' in Eq. (5) that has a top-K norm length.

Before producing an output for the interaction results, the interaction and K-max pooling operations are repeated several times in order to capture high-level interactions among the different field category vectors. After that, we output a prediction from the final interaction vector representation by a fully connected layer. Note that the above network structure can also be used to build an auto-encoder to conduct unsupervised learning (Vincent et al., 2008). We leave this for future work, while staying with the label output network for both supervised (containing both negative and positive examples) and unsupervised (only containing positive examples, where negative examples are generated randomly) learning tasks.

An interesting discussion is to compare our Cat2Vec model with association rule mining, which aims to identify the most frequently appearing joint category instances (items), with or without a condition. Apriori (Agrawal et al., 1994) is a popular algorithm for association rule mining that exploits dependencies between candidate frequent itemsets of length K and frequent itemsets of length K - 1. In our pairwise interaction Cat2Vec model, with neural networks, we provide an alternative way of generating such high-order interactions (thus itemsets) among category instances. Via the pooling operation, our model can also find the most frequent category sets automatically, which will be demonstrated and tested in our experiments in the following Sections 4 and 5.

Figure 2: The discriminant Cat2Vec model, which learns the category embedding by training a discriminator to distinguish the true samples from the fake ones.

To train the pairwise interaction Cat2Vec model, we design a training scheme called discriminant Cat2Vec, which trains the model in a supervised way for unsupervised learning of the data. In the discriminant Cat2Vec, we feed the sample encoding module shown in Figure 1 with a true or fake sample; the encoded sample vector is then followed by an MLP to predict the probability p of a true sample. As such, the generation of fake samples influences the learned category vectors. In this paper, we generate a fake sample in the following way: first, randomly choose a sample from the training set; second, randomly choose several categories in this sample and replace them with randomly chosen categories that belong to the same field. For example, given a user behaviour instance x = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:BEIJING], if we randomly choose the category CITY:BEIJING and replace it with CITY:SHANGHAI, we build the fake sample x' = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:SHANGHAI]. The discriminant network is then trained to predict whether the new sample is a true sample.
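A minimal sketch of the fake-sample generation just described; the field inventories and the number of replaced fields are illustrative assumptions.

    import random

    FIELD_VALUES = {
        "WEEKDAY": ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY"],
        "GENDER": ["MALE", "FEMALE"],
        "CITY": ["BEIJING", "SHANGHAI", "HONGKONG"],
    }

    def make_fake(sample, n_replace=1, rng=random):
        """Corrupt a true sample by re-drawing categories within the same field(s)."""
        fake = dict(sample)
        for field in rng.sample(list(fake), n_replace):
            alternatives = [v for v in FIELD_VALUES[field] if v != fake[field]]
            fake[field] = rng.choice(alternatives)
        return fake

    true_x = {"WEEKDAY": "WEDNESDAY", "GENDER": "MALE", "CITY": "BEIJING"}
    print(make_fake(true_x))  # e.g. CITY:BEIJING -> CITY:SHANGHAI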
The loss function of the discriminant network is the average cross entropy, which maximises the likelihood of correct prediction:

    L = -(1/M) Σ_{i=1}^{M} [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ],

where M is the number of training samples. The i-th sample is labelled with y_i ∈ {1, 0}, meaning a true or fake sample, and p_i is the predicted probability that the given training sample is true.

Figure 3: Precision and rank correlation on synthetic data; a bigger embedding size and an appropriate dropout rate lead to better performance."}, {"section_index": "5", "section_name": "4 SYNTHETIC DATA EXPERIMENTS", "section_text": "To explore and add to our understanding of the pairwise interaction Cat2Vec model, we conduct a simulation test with synthetic data. In particular, we are interested in understanding how well the learned vectors are able to capture and leverage the most significant patterns embedded in the data."}, {"section_index": "6", "section_name": "4.1 SYNTHETIC DATASET AND EVALUATION METRICS", "section_text": "To simulate real-world multi-field categorical data, we use multivariate normal sampling to generate the true data distribution for the following experiments. Suppose the data has 4 fields {A, B, C, D}, and each field contains 10 categories; a sample can be represented as x = (a_i, b_i, c_i, d_i). We then randomly generate the means and covariance matrix for 4-dimensional truncated multivariate normal sampling with two-sided truncation. This sampling method generates 4 float numbers between 0 and 10. We convert the float numbers to integers, which represent the categories in the 4 fields. In such a way, we can generate data with a specific joint distribution, which means a certain categorical pair or 3-tuple like p(a_4, b_4) or p(a_3, c_5, d_8) may have a higher joint distribution probability. Recall that in our pairwise interaction Cat2Vec model, we have a K-max pooling layer, which selects the most popular category pairs in the dataset. Repeating the pairwise interaction layers and K-max pooling layers, we can also explore high-order categorical 3-tuples or 4-tuples, etc. Therefore, our task here is to evaluate whether our model is able to capture these frequently occurring patterns from a given dataset; in other words, to test whether our model keeps the category pairs with the highest joint distribution probabilities in the K-max pooling results. This process is in line with association rule mining (Agrawal et al., 1994), exploring frequent categorical n-tuples from frequent categorical (n-1)-tuples.

We generate the positive data according to the above truncated multivariate normal sampling and choose uniform sampling to generate the fake (negative) data. We then apply discriminant Cat2Vec to train the model. Because we know the true distribution of the generated real data, the most frequent category pairs/triples are known.
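The data-generation recipe above can be sketched as follows; the means, covariance, and truncation bounds are placeholders, and two-sided truncation is approximated here by rejection sampling.

    import numpy as np

    def sample_categorical(n, rng, low=0.0, high=10.0):
        """Truncated multivariate normal -> integer categories for 4 fields."""
        mean = rng.uniform(low, high, size=4)                # random field means
        a = rng.normal(size=(4, 4))
        cov = a @ a.T + np.eye(4)                            # random PSD covariance
        rows = []
        while len(rows) < n:                                 # two-sided truncation by rejection
            x = rng.multivariate_normal(mean, cov)
            if np.all((x >= low) & (x < high)):
                rows.append(x.astype(int))                   # floats in [0,10) -> ids 0..9
        return np.array(rows)

    rng = np.random.default_rng(1)
    data = sample_categorical(1000, rng)
    print(data[:3])  # correlated fields yield frequently co-occurring category pairs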
We use precision and Spearman's rank correlation coefficient to evaluate the results of the 1st/2nd K-max pooling layers (the category pairs/triples pooling results), to see if the model can learn the true joint distribution of the real data. The details of the evaluation metrics are described in the following section.

To evaluate how our network structure and K-max pooling help identify the significant n-tuples, we feed 1,000 samples to the trained model and record the 1st and 2nd K-max pooling layers' results. We then count the frequency of the category pairs/3-tuples in the real samples, and select the top 20 ranked category pairs/3-tuples as the target. We then count the frequency of the max-pooled category pairs/triples in the results and compare the top 20 frequent category pairs/3-tuples in the results to calculate precision and Spearman's rank correlation coefficient. Precision measures the fraction of category pairs/triples in the results that are also in the target. Spearman's rank correlation coefficient measures the correlation between the two ranked lists."}, {"section_index": "7", "section_name": "4.2 RESULT AND DISCUSSION", "section_text": "Figure 3 summarises the results of the precision and the rank correlation on synthetic data. We can see that our model can easily find over 80% of the category pairs with high joint distribution probabilities under the best parameter settings. From the rank correlation, our model achieves a ranking correlation over 0.6 for category pairs, which means category pairs with higher joint distribution probability are more likely to appear in the K-max pooling result. For the category triples case, the precision and rank correlation are lower than for category pairs, because finding 3-order combinations is harder and relies on the accuracy of the 2-order ones. We also vary the dropout rate against those measures. It shows that dropout tends to help improve the accuracy of captured patterns. This can be explained by the fact that dropout brings randomness into the selection and allows exploration. But the best dropout rate seems rather arbitrary and highly dependent on the other parameter settings."}, {"section_index": "8", "section_name": "5 REAL-WORLD DATA EXPERIMENTS", "section_text": "In this section, we continue our experiments using a real-world advertising dataset for click-through rate estimation. The iPinYou dataset (Liao et al., 2014) is a public real-world display ad dataset with each ad display's information and the corresponding user click feedback (Zhang et al., 2014). The dataset contains around 19.5M ad display instances with 14.8k positive user feedback instances (clicks). Each instance has 23 fields, and we choose the 18 fields that have categories with occurrence larger than 10."}, {"section_index": "9", "section_name": "5.1 UNSUPERVISED LEARNING EXPERIMENT", "section_text": "We continue our study of the model's ability to capture the most significant patterns, as described in Section 4. Because the iPinYou dataset contains unencrypted fields and categories, e.g. city, region and tag, we choose the iPinYou dataset introduced above as the real (positive) data.
As for the fake (negative) data, we randomly choose a sample in the iPinYou dataset and randomly replace some of its categories with other categories in the same field, similar to the procedure introduced in Section 3. We also set up two baseline models for comparing prediction accuracy: (i) a DNN Concat model, which concatenates category embedding vectors to make the prediction, and (ii) a DNN Sum model, which sums up the category embedding vectors to make the prediction.

We tried different parameter settings, and the performance is measured by the accuracy of our model at predicting real samples. We also calculate the rank correlation coefficient and the precision to evaluate our model, in the same way as described in Section 4.1."}, {"section_index": "10", "section_name": "5.1.1 RESULT AND DISCUSSION", "section_text": "As shown in Table 2, on the iPinYou dataset our pairwise interaction models achieve an accuracy of 85%, which is about a 1.7% improvement over the simple DNN models. Even the worst case of our model is better than the DNN models' best case. This means our model can find extra information through the interactions and the K-max pooling processes. In addition, the model with 3 interaction times usually yields better performance than that with 2 interaction times, which may be because more interaction times capture higher-order interactions and help make more accurate predictions. The model with different gate types does not show significant differences.

We next use the same evaluation metrics as described in Section 4.1 to test the ability to capture data patterns. We find that on the real-world dataset, our model is still able to keep high precision and rank correlation, and can achieve even better performance. The precision and rank correlation on category pairs are over 0.8, which is a 30% improvement over the performance on the synthetic dataset. For the category triples case, we also have similar performance compared with the synthetic dataset.

The selected fields are WEEKDAY, HOUR, USER AGENT, IP, REGION, CITY, AD EXCHANGE, DOMAIN, URL, AD SLOT ID, AD SLOT WIDTH, AD SLOT HEIGHT, AD SLOT VISIBILITY, AD SLOT FORMAT, AD SLOT FLOOR PRICE, CREATIVE ID, KEY PAGE URL, AND USER TAGS.

Figure 4: Precision and rank correlation on iPinYou data; a bigger embedding size and an appropriate dropout rate lead to better performance.

Table 2: Accuracy of distinguishing true impressions from fake impressions; "embedding" means the embedding vector size and "interaction" is the number of interaction times in our model."}, {"section_index": "11", "section_name": "5.2 CLICK-THROUGH RATE PREDICTION EXPERIMENT", "section_text": "We now move to the evaluation on a supervised learning task. We consider click-through rate (CTR) prediction, which is important for many personalised Web services such as E-commerce, social recommendation and computational advertising. The most widely used CTR estimation model is the logistic regression based on one-hot data representation.
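For reference, here is a minimal sketch of that standard baseline: logistic regression over sparse one-hot features. The data is synthetic; in practice one would train on the encoded iPinYou instances.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_lr(X, y, lr=0.1, epochs=50):
        """Plain logistic regression on one-hot rows X with click labels y."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)
            w -= lr * X.T @ (p - y) / len(y)   # gradient of the log loss
            b -= lr * np.mean(p - y)
        return w, b

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(256, 20)).astype(float)  # stand-in one-hot features
    y = (X[:, 0] * 0.8 + rng.random(256) * 0.4 > 0.5).astype(float)
    w, b = train_lr(X, y)
    print(sigmoid(X[:5] @ w + b))  # predicted click-through probabilities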
Many deep learning models have been further investigated for CTR prediction. Zhang et al. (2016) proposed Factorisation-Machine Supported Neural Network (FNN) models for user response prediction. The Convolutional Click Prediction Model (CCPM) (Liu et al., 2015) has also been used for CTR prediction and gained some improvement on this task. To our knowledge, all of the above previous work focuses on directly improving the prediction performance in supervised learning tasks, and none of it investigates the learned representation of multi-field categorical data or how to learn a better representation.

In order to investigate our pairwise interaction model in the CTR task, we use the pairwise interaction sample encoding module to encode a training sample, concatenated with the embedding vectors, followed by an MLP (multi-layer perceptron) to predict the click-through probability. We choose the following models as strong baselines:

Logistic Regression (LR): LR is a widely used linear model (Richardson et al., 2007).

Factorisation Machine (FM): We simply apply the factorisation machine on one-hot encoded sparse features of the training sample (Rendle, 2010).

CCPM: CCPM (Liu et al., 2015) is a convolutional model for click prediction.

FNN: A DNN model based on concatenated category vectors followed by MLPs, able to capture high-order latent patterns of multi-field categorical data (Zhang et al., 2016).

Cat2Vec-FNN-1: This is our proposed architecture that only concatenates pairwise interaction output vectors among the K-max pooling results to form the final vector representation and make the prediction.

Cat2Vec-FNN-2: This is our proposed architecture that explores the pairwise interaction results between the K-max pooling results and the category embeddings to form the final vector representation and make the prediction.

Figure 5: Performance comparison over different parameter settings.

We use Area Under ROC Curve (AUC) as the evaluation metric to measure the performance of a prediction. We also conduct a grid search for each model to make sure it has achieved its best performance. Specifically, the empirically optimal hyperparameters are set as follows: the category embedding size is 16, the SGD batch size is 64, Nadam (Sutskever et al., 2013) is used as the SGD optimiser with default settings, the gate type is MUL, the norm type for K-max pooling is the L2 norm, and the activation function is tanh. The model is followed by three fully connected layers of width [128, 32, 1]. We also tried different interaction times and finally set it as two (3-tuple), suggesting that a higher order of interactions helps improve the performance, but more than two would overfit the data and thus damage the performance."}, {"section_index": "12", "section_name": "5.2.1 RESULT AND DISCUSSION", "section_text": "Table 3 gives the results of our CTR experiment, compared with various baselines. We see that there is about a 3% improvement over LR. The AUC performance of the proposed discriminant Cat2Vec models also outperforms the FM/CCPM/FNN models, as our model is able to take higher-order information into consideration, which helps make better decisions.

In our pairwise interaction model, we also test different hyperparameters and settings; the results are given in Figure 5. First, we evaluate the performance over different dropout rates, and find that setting dropout to 0.1 is best, as shown in Figure 5. We also explore the impact of interaction times.
From the results, the model with 2 interaction times has better generalisation on the test set. Finally, we compare three different activation functions (sigmoid, tanh, relu), with identity mapping as the baseline. The results show that tanh yields the best performance; it has the advantage of non-linear transformation between (-1, 1), which may bring more benefits on multi-field categorical data.

In this paper we have proposed a novel Cat2Vec model working on multi-field categorical data. Different from other models, Cat2Vec repetitively computes and selects inter-field category pairwise interactions to explore high-level interactions, which is analogous to the Apriori algorithm in association rule mining. Moreover, we present an efficient discriminant training method to estimate the category vectors. We also apply our pairwise interaction model to CTR prediction, where we have observed a significant performance gain over several strong baselines. For future work, we plan to design more sophisticated gates to explore different interaction patterns among inter-field categories; leveraging Cat2Vec in various data mining problems is also of great interest to us."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Rakesh Agrawal, Ramakrishnan Srikant, et al. Fast algorithms for mining association rules. In Proc. 20th int. conf. very large data bases, VLDB, volume 1215, pp. 487-499, 1994.

Olivier Chapelle, Eren Manavoglu, and Romer Rosales. Simple and scalable response prediction for display advertising. ACM Transactions on Intelligent Systems and Technology (TIST), 5(4):61, 2015.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645-6649. IEEE, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Steffen Rendle. Factorization machines. In 2010 IEEE International Conference on Data Mining, pp. 995-1000. IEEE, 2010.

Ilya Sutskever, James Martens, George E Dahl, and Geoffrey E Hinton. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139-1147, 2013.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pp. 1096-1103. ACM, 2008.

Weinan Zhang, Tianming Du, and Jun Wang. Deep learning over multi-field categorical data. In European Conference on Information Retrieval, pp. 45-57. Springer, 2016.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155, 2003."}]
HJ5PIaseg
[{"section_index": "0", "section_name": "TOWARDS AN AUTOMATIC TURING TEST:\nLEARNING TO EVALUATE DIALOGUE RESPONSE:", "section_text": "Ryan Lowe\u2018\nNicolas Angelard-Gontier \u201c\n\u00b0 Reasoning and Learning Lab, School of Computer Science, McGill University\n\u00a9 Montreal Institute for Learning Algorithms, Universit\u00e9 de Montr\u00e9al\n+ CIFAR Senior Fellow\nAutomatically evaluating the quality of dialogue responses for unstructured do-\nmains is a challenging problem. Unfortunately, existing automatic evaluation\nmetrics are biased and correlate very poorly with human judgements of response\nquality (Liu et al.|/2016). Yet having an accurate automatic evaluation procedure\nis crucial for dialogue research, as it allows rapid prototyping and testing of new\nmodels with fewer expensive human evaluations. In response to this challenge, we\nformulate automatic dialogue evaluation as a learning problem. We present an eval-\nuation model (ADEM) that learns to predict human-like scores to input responses,\nusing a new dataset of human response scores. We show that the ADEM model\u2019s\npredictions correlate significantly, and at level much higher than word-overlap met-\nrics such as BLEU, with human judgements at both the utterance and system-level.\nWe also show that ADEM can generalize to evaluating dialogue models unseen\nduring training, an important step for automatic dialogue evaluation."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Learning to communicate with humans is a crucial ability for intelligent agents. Among the primary\nforms of communication between humans is natural language dialogue. As such, building systems\nthat can naturally and meaningfully converse with humans has been a central goal of artificial\nintelligence since the formulation of the Turing test (Turing||T950). Research on one type of such\nsystems, sometimes referred to as non-task-oriented dialogue systems, goes back to the mid-60s with\nWeizenbaum\u2019s famous program ELIZA: a rule-based system mimicking a Rogerian psychotherapist\nby persistently either rephrasing statements or asking questions (1966). Recently, there\nhas been a surge of interest in the research community towards building large-scale non- rask-orienved\ndialogue systems using neural networks (Sordoni et al_| (Sordoni et al.| [2015b} [Shang et al. |2015}/Vinyals & Le\n2015} [Serban et al.| }[2015). These models are trained in an end-to-end manner to\noptimize a single objective, usually the likelihood of generating the responses from a fixed corpus\nSuch models have already had a substantial impact in industry, including Google\u2019s Smart Reply\nsystem (Kannan et al. /2016), and Microsoft\u2019s Xiaoice chatbot (Markoff & Mozur}/2015), which has\nover 20 million users. More recently, Amazon has announced the Alexa Prize Challenge: a research\ncompetition with the goal of developing a natural and engaging chatbot system (Farber]\nOne of the challenges when developing such systems is to have a good way of measuring progress\nin this case the performance of the chatbot. The Turing test provides one solution to the evaluation\nof dialogue systems, but there are limitations with its original formulation. The test requires live\nhuman interactions, which is expensive and difficult to scale up. 
Furthermore, the test requires carefully designing the instructions to the human interlocutors, in order to balance their behaviour and expectations so that different systems may be ranked accurately by performance. Although unavoidable, these instructions introduce bias into the evaluation measure. The more common approach of having humans evaluate the quality of dialogue system responses, rather than distinguish them from human responses, induces similar drawbacks in terms of time, expense, and lack of scalability.

*The second and third authors contributed equally."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In the case of chatbots designed for specific conversation domains, it may also be difficult to find sufficient human evaluators with appropriate background in the topic (e.g., Lowe et al. (2015)).

Despite advances in neural network-based models, evaluating the quality of dialogue responses automatically remains a challenging and under-studied problem in the non-task-oriented setting. The most widely used metric for evaluating such dialogue systems is BLEU (Papineni et al., 2002), a metric measuring word overlap originally developed for machine translation. However, it has been shown that BLEU and other word-overlap metrics are biased and correlate poorly with human judgements of response quality (Liu et al., 2016). There are many obvious cases where these metrics fail, as they are often incapable of considering the semantic similarity between responses (see Figure 1). Despite this, many researchers still use BLEU to evaluate their dialogue models (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a).

Figure 1: Example where word-overlap scores (e.g. BLEU) fail for dialogue evaluation; although the model response is completely reasonable, it has no words in common with the reference response, and thus would be given low scores by metrics such as BLEU.

To make progress towards this goal, we first collect a dataset of human scores for various dialogue responses, and we use this dataset to train an automatic dialogue evaluation model, which we call ADEM. The model is trained in a semi-supervised manner using a hierarchical recurrent neural network (RNN) to predict human scores. We show that ADEM scores correlate significantly, and at a level much higher than BLEU, with human judgement at both the utterance level and the system level. Crucially, we also show that ADEM can generalize to evaluating new models, whose responses were unseen during training, without a drop in performance, making ADEM a strong first step towards effective automatic dialogue response evaluation.

To train a model to predict human scores for dialogue responses, we first collect a dataset of human judgements (scores) of Twitter responses using the crowdsourcing platform Amazon Mechanical Turk (AMT). The aim is to have accurate human scores for a variety of conversational responses, conditioned on dialogue contexts, which span the full range of response qualities. For example, the responses should include both relevant and irrelevant responses, both coherent and non-coherent responses, and so on. To achieve this variety, we use candidate responses from several models. Following Liu et al. (2016), we use the following 4 sources of candidate responses: (1) a response selected by a TF-IDF retrieval-based model, (2) a response selected by the Dual Encoder (DE) (Lowe et al., 2015), (3) a response generated using the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a), and (4) human-generated responses. It should be noted that the human-generated candidate responses are not the reference responses from a fixed corpus, but novel human responses that are different from the reference.
In addition to increasing response variety, this is necessary for the model to learn to compare the reference responses to the candidate responses.

We conducted two rounds of AMT experiments. We first asked AMT workers to provide a reasonable continuation of a Twitter dialogue (i.e. generate the next response given the context of a conversation). Each survey contained 20 questions, including an attention-check question. Workers were instructed to generate longer responses, in order to avoid simple one-word responses. In total, we obtained approximately 2,000 human responses.

Second, we filtered these human-generated responses for potentially offensive language, and combined them with approximately 1,000 responses from each of the above models into a single set of responses. We then asked AMT workers to rate the overall quality of each response on a scale of 1 (low quality) to 5 (high quality). Each user was asked to evaluate 4 responses from 50 different contexts. We included four additional attention-check questions, and a set of five contexts was given to each participant for assessment of inter-annotator agreement. We removed all users who either failed an attention-check question or achieved a κ inter-annotator agreement score lower than 0.2 (Cohen, 1968). The remaining evaluators had a median κ score of 0.63, indicating moderate agreement. This is consistent with results from Liu et al. (2016). Dataset statistics are provided in Table 1.

# Examples                4,104
# Contexts                1,026
# Training examples       2,872
# Validation examples       616
# Test examples             616
κ score (inter-annotator
correlation)               0.63

Table 1: Statistics of the dialogue response evaluation dataset. Each example is in the form (context, model response, reference response, human score).

Measurement        κ score
Overall            0.63
Topicality         0.57
Informativeness    0.31
Background         0.05

Table 2: Median κ inter-annotator agreement scores for various questions asked in the survey.

Recurrent neural networks (RNNs) are a type of neural network with time-delayed connections between the internal units. This leads to the formation of a hidden state h_t, which is updated for every input: h_t = f(W_hh h_{t-1} + W_ih x_t), where W_ih and W_hh are parameter matrices, f is a smooth non-linear activation function such as tanh, and x_t is the input at time t. The hidden state allows RNNs to better model sequential data, such as natural language.

One of the most popular approaches for automatically evaluating the quality of dialogue responses is by computing their word overlap with the reference response. In particular, the most popular metrics are the BLEU and METEOR scores used for machine translation, and the ROUGE score used for automatic summarization. While these metrics tend to correlate with human judgements in their target domains, they have recently been shown to be highly biased and to correlate very poorly with human judgements for dialogue response evaluation (Liu et al., 2016). We briefly describe BLEU here, and provide a more detailed summary of word-overlap metrics in the Appendix.

In initial experiments, we also asked humans to provide scores for topicality, informativeness, and whether the context required background information to be understandable. Note that we did not ask for fluency scores, as 3/4 of the responses were produced by humans (including the retrieval models).
We found that scores for informativeness and background had low inter-annotator agreement (Table 2), and scores for topicality were highly correlated with the overall score (Pearson correlation of 0.72). Results on these auxiliary questions varied depending on the wording of the question. Thus, we continued our experiments by only asking for the overall score. We provide more details concerning the data collection in the Appendix, as it may aid others in developing effective crowdsourcing experiments.

To train evaluation models on human judgements, it is crucial that we obtain scores of responses that lie near the distribution produced by state-of-the-art models. This is why we use the Twitter Corpus (Ritter et al., 2011), as such models are pre-trained and readily available. Further, the set of topics discussed is quite broad, as opposed to the very specific Ubuntu Dialogue Corpus, and therefore the model should generalize better to other domains involving chit-chat. Finally, since it does not require domain-specific knowledge (e.g. technical knowledge), it should be easy for AMT workers to annotate.

In this paper, we consider RNNs augmented with long-short term memory (LSTM) units (Hochreiter & Schmidhuber, 1997). LSTMs add a set of gates to the RNN that allow it to learn how much to update the hidden state. LSTMs are one of the most well-established methods for dealing with the vanishing gradient problem in recurrent networks (Hochreiter, 1991; Bengio et al., 1994).

Figure 2: The ADEM model, which uses a hierarchical encoder to produce the context embedding c.

BLEU   BLEU (Papineni et al., 2002) analyzes the co-occurrences of n-grams in the ground truth and the proposed responses. It computes the n-gram precision for the whole dataset, which is then multiplied by a brevity penalty to penalize short translations. For BLEU-N, N denotes the largest value of n-grams considered (usually N = 4).

Drawbacks   One of the major drawbacks of word-overlap metrics is their failure to capture the semantic similarity between the model and reference responses when there are few or no common words. This problem is less critical for machine translation; since the set of reasonable translations of a given sentence or document is rather small, one can reasonably infer the quality of a translated sentence by only measuring the word overlap between it and one (or a few) reference translations. However, in dialogue, the set of appropriate responses given a context is much larger (Artstein et al., 2009); in other words, there is a very high response diversity that is unlikely to be captured by word-overlap comparison to a single response.

Further, word-overlap scores are computed directly between the model and reference responses. As such, they do not consider the context of the conversation. While this may be a reasonable assumption in machine translation, it is not the case for dialogue; whether a model response is an adequate substitute for the reference response is clearly context-dependent. For example, the two responses in Figure 1 are equally appropriate given the context.
However, if we simply change the context to "Have you heard of any good movies recently?", the model response is no longer relevant while the reference response remains valid."}, {"section_index": "3", "section_name": "4 AN AUTOMATIC DIALOGUE EVALUATION MODEL (ADEM)", "section_text": "To overcome the problems of evaluation with word-overlap metrics, we aim to construct a dialogue evaluation model that: (1) captures semantic similarity beyond word-overlap statistics, and (2) exploits both the context of the conversation and the reference response to calculate its score for the model response. We call this evaluation model ADEM.

ADEM learns distributed representations of the context, model response, and reference response using a hierarchical RNN encoder. Given the dialogue context c, reference response r, and model response r̂, ADEM first encodes each of them into vectors (c, r, and r̂, respectively) using the RNN encoder. Then, ADEM computes the score using a dot-product between the vector representations of c, r, and r̂ in a linearly transformed space:

    score(c, r, r̂) = (c^T M r̂ + r^T N r̂ - α) / β,

where M, N ∈ R^{n×n} are learned matrices initialized to the identity, and α, β are scalar constants used to initialize the model's predictions in the range [0, 5]. The model is shown in Figure 2.

The matrices M and N can be interpreted as linear projections that map the model response r̂ into the space of contexts and reference responses, respectively. The model gives high scores to responses that have similar vector representations to the context and reference response after this projection. The model is end-to-end differentiable; all the parameters can be learned by backpropagation. In our implementation, the parameters θ = {M, N} of the model are trained to minimize the squared error between the model predictions and the human scores, with L1-regularization:

    L = Σ_{i=1:K} [score(c_i, r_i, r̂_i) - human_score_i]^2 + γ ||θ||_1,

where γ is a scalar constant. The simplicity of our model leads to both accurate predictions and fast evaluation time (see Appendix), which is important to allow rapid prototyping of dialogue systems.

Pre-training with VHRED   We would like an evaluation model that can make accurate predictions from few labeled examples, since these examples are expensive to obtain. We therefore employ semi-supervised learning, and use a pre-training procedure to learn the parameters of the encoder. In particular, we train the encoder as part of a neural dialogue model; we attach a third decoder RNN that takes the output of the encoder as input, and train it to predict the next utterance of a dialogue conditioned on the context.
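Before turning to the pre-training details, here is a minimal numpy sketch of the scoring function and the regularized loss defined above. The embedding size, α, β, and γ values are illustrative assumptions, and the encoder that produces c, r, and r̂ is assumed to exist elsewhere.

    import numpy as np

    def adem_score(c, r, r_hat, M, N, alpha=2.5, beta=5.0):
        # score(c, r, r_hat) = (c^T M r_hat + r^T N r_hat - alpha) / beta
        return (c @ M @ r_hat + r @ N @ r_hat - alpha) / beta

    def adem_loss(batch, M, N, gamma=0.02):
        # Squared error against human scores plus L1 regularization on theta = {M, N}
        err = sum((adem_score(c, r, rh, M, N) - h) ** 2 for c, r, rh, h in batch)
        return err + gamma * (np.abs(M).sum() + np.abs(N).sum())

    n = 7                                  # reduced embedding size (see PCA step below)
    M, N = np.eye(n), np.eye(n)            # initialized to the identity, as described
    rng = np.random.default_rng(0)
    batch = [(rng.normal(size=n), rng.normal(size=n), rng.normal(size=n), 4.0)]
    print(adem_loss(batch, M, N))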
The dialogue model we employ for pre-training is the latent variable hierarchical recurrent encoder-decoder (VHRED) model (Serban et al., 2016b). The VHRED model is an extension of the original hierarchical recurrent encoder-decoder (HRED) model with a turn-level stochastic latent variable. The dialogue context is encoded into a vector using our hierarchical encoder, and the VHRED then samples a Gaussian variable that is used to condition the decoder (see Appendix for further details). After training VHRED, we use the last hidden state of the context-level encoder, when c, r, and r̂ are fed as input, as the vector representations for c, r, and r̂, respectively. We use representations from the VHRED model as it produces more diverse and coherent responses compared to its HRED counterpart.

Maximizing the likelihood of generating the next utterance in a dialogue is not only a convenient way of training the encoder parameters; it is also an objective that is consistent with learning useful representations of the dialogue utterances. Two context vectors produced by the VHRED encoder are similar if the contexts induce a similar distribution over subsequent responses; this is consistent with the formulation of the evaluation model, which assigns high scores to responses that have similar vector representations to the context. VHRED is also closely related to the skip-thought-vector model (Kiros et al., 2015), which has been shown to learn useful representations of sentences for many tasks, including semantic relatedness and paraphrase detection. The skip-thought-vector model takes as input a single sentence and predicts the previous sentence and next sentence. On the other hand, VHRED takes as input several consecutive sentences and predicts the next sentence. This makes it particularly suitable for learning long-term context representations.

In order to reduce the effective vocabulary size, we use byte pair encoding (BPE) (Gage, 1994; Sennrich et al., 2015), which splits each word into sub-words or characters. We also use layer normalization (Ba et al., 2016) for the hierarchical encoder, which we found worked better at the task of dialogue generation than the related recurrent batch normalization (Ioffe & Szegedy, 2015; Cooijmans et al., 2016). To train the VHRED model, we employed several of the same techniques found in Serban et al. (2016b) and Bowman et al. (2016): we drop words in the decoder with a fixed rate of 25%, and we anneal the KL-divergence term linearly from 0 to 1 over the first 60,000 batches. We use Adam as our optimizer (Kingma & Ba, 2014).

The hierarchical RNN encoder in our model consists of two layers of RNNs (El Hihi & Bengio, 1995; Sordoni et al., 2015a). The lower-level RNN, the utterance-level encoder, takes as input words from the dialogue, and produces a vector output at the end of each utterance. The context-level encoder takes the representation of each utterance as input and outputs a vector representation of the context. This hierarchical structure is useful for incorporating information from early utterances in the context. Following previous work, we take the last hidden state of the context-level encoder as the vector representation of the input utterance or context.

An important point is that the ADEM procedure above is not a dialogue retrieval model. The fundamental difference between ADEM and a dialogue model is that ADEM has access to the reference response. Thus, ADEM can compare a model's response to a known good response, which is significantly easier than inferring response quality from solely the context.
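A compact sketch of the two-level (utterance-level and context-level) recurrent encoding described above, with a plain tanh RNN standing in for the LSTM encoder used in the paper; sizes and weights are illustrative.

    import numpy as np

    def rnn_encode(vectors, Wih, Whh, h0=None):
        # h_t = tanh(W_hh h_{t-1} + W_ih x_t); return the last hidden state
        h = np.zeros(Whh.shape[0]) if h0 is None else h0
        for x in vectors:
            h = np.tanh(Whh @ h + Wih @ x)
        return h

    def hierarchical_encode(dialogue, params):
        # Lower level: words -> one vector per utterance.
        # Upper level: utterance vectors -> one context vector.
        utt_vecs = [rnn_encode(utt, params["Wih_u"], params["Whh_u"]) for utt in dialogue]
        return rnn_encode(utt_vecs, params["Wih_c"], params["Whh_c"])

    rng = np.random.default_rng(0)
    d, h = 16, 8
    params = {"Wih_u": rng.normal(size=(h, d)) * 0.1, "Whh_u": rng.normal(size=(h, h)) * 0.1,
              "Wih_c": rng.normal(size=(h, h)) * 0.1, "Whh_c": rng.normal(size=(h, h)) * 0.1}
    dialogue = [rng.normal(size=(5, d)), rng.normal(size=(3, d))]  # two utterances of word vectors
    print(hierarchical_encode(dialogue, params).shape)  # (8,) context embedding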
                  Full dataset                        Test set
Metric            Spearman          Pearson           Spearman          Pearson
BLEU-1            0.026 (0.102)     0.055 (<0.001)    -0.036 (0.413)    0.074 (0.097)
BLEU-2            0.039 (0.013)     0.081 (<0.001)    0.051 (0.254)     0.120 (<0.001)
BLEU-3            0.045 (0.004)     0.043 (0.005)     0.051 (0.248)     0.073 (0.104)
BLEU-4            0.051 (0.001)     0.025 (0.113)     0.063 (0.156)     0.073 (0.103)
ROUGE             0.062 (<0.001)    0.114 (<0.001)    0.096 (0.031)     0.147 (<0.001)
METEOR            0.021 (0.189)     0.022 (0.165)     0.013 (0.745)     0.021 (0.601)
T2V               0.140 (<0.001)    0.141 (<0.001)    0.140 (<0.001)    0.141 (<0.001)
VHRED             -0.035 (0.062)    -0.030 (0.106)    -0.091 (0.023)    -0.010 (0.805)

                  Validation set                      Test set
C-ADEM            0.272 (<0.001)    0.238 (<0.001)    0.293 (<0.001)    0.303 (<0.001)
R-ADEM            0.428 (<0.001)    0.383 (<0.001)    0.409 (<0.001)    0.392 (<0.001)
ADEM (T2V)        0.395 (<0.001)    0.392 (<0.001)    0.408 (<0.001)    0.411 (<0.001)
ADEM              0.436 (<0.001)    0.389 (<0.001)    0.414 (<0.001)    0.395 (<0.001)

Table 3: Correlation between metrics and human judgements, with p-values shown in brackets. 'ADEM (T2V)' indicates ADEM with tweet2vec embeddings (Dhingra et al., 2016), and 'VHRED' indicates the dot product of VHRED embeddings (i.e. ADEM at initialization). C- and R-ADEM represent the ADEM model trained to only compare the model response to the context or reference response, respectively.

Figure 3: Scatter plot showing model against human scores, for BLEU-2 and ROUGE on the full dataset, and ADEM on the test set. We add Gaussian noise drawn from N(0, 0.3) to the integer human scores to better visualize the density of points, at the expense of appearing less correlated.

For training VHRED, we use a context embedding size of 2000. However, we found the ADEM model learned more effectively when this embedding size was reduced. Thus, after training VHRED, we use principal component analysis (PCA) (Pearson, 1901) to reduce the dimensionality of the context, model response, and reference response embeddings to n. While our results are robust to n, we found experimentally that n = 7 provided slightly improved performance. We provide other hyperparameter values in the Appendix.
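A minimal sketch of this dimensionality-reduction step, with a random matrix standing in for the stacked 2000-dimensional VHRED embeddings:

```python
# Minimal sketch: project VHRED embeddings down to n dimensions with PCA
# before computing ADEM scores. The data here is an illustrative stand-in.
import numpy as np
from sklearn.decomposition import PCA

vhred_embeddings = np.random.randn(616, 2000)   # stand-in encoder outputs
pca = PCA(n_components=7)                        # n = 7 per the text above
reduced = pca.fit_transform(vhred_embeddings)    # shape (616, 7)
```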
When evaluating our models, we conduct early stopping on a separate validation set to obtain the best parameter setting. For the evaluation dataset, we split the train/validation/test sets such that there is no context overlap (i.e. the contexts in the test set are unseen during training).

Utterance-level correlations We first present new utterance-level correlation results for the word-overlap metrics, in addition to results with embedding baselines and ADEM, in Table 3.* The baseline metrics are evaluated on the entire dataset of 4,104 responses. We measure the correlation for ADEM on the validation and test sets (616 responses each).

* We present both the Spearman correlation (computed on ranks; depicts monotonic relationships) and the Pearson correlation (computed on true values; depicts linear relationships).

Figure 4: Scatterplots depicting the system-level correlation results for BLEU-2, BLEU-4, ROUGE and ADEM on the test set. Each point represents the average scores for the responses from a dialogue model (TFIDF, DE, HRED, human). Human scores are shown on the horizontal axis, with normalized metric scores on the vertical axis. The ideal metric has a perfectly linear relationship.

We also conduct an additional analysis of the response data from Liu et al. (2016), where the pre-processing is standardized by removing '<first_speaker>' tokens at the beginning of each utterance. The results are detailed in Table 10 of Appendix D. We can observe from both this data, and the new data in Table 3, that the correlations for the word-overlap metrics are even lower than estimated in previous studies (Liu et al., 2016; Galley et al., 2015). In particular, this is the case for BLEU-4, which has frequently been used for dialogue response evaluation (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a).

We can see from Table 3 that ADEM correlates far better with human judgement than the word-overlap baselines. This is further illustrated by the scatterplots in Figure 3. We also compare with ADEM using tweet2vec embeddings for c, r, and r̂, which are computed at the character-level with a bidirectional GRU (Dhingra et al., 2016), and obtain comparable but slightly inferior performance compared to using VHRED embeddings.

System-level correlations We show the system-level correlations for various metrics in Table 4, and present them visually in Figure 4. Each point in the scatterplots represents a dialogue model; humans give low scores to TFIDF and DE responses, higher scores to HRED, and the highest scores to other human responses. It is clear that existing word-overlap metrics are incapable of capturing this relationship for even four models. This renders them completely deficient for dialogue evaluation. However, ADEM produces the exact same model ranking as humans, achieving a significant Pearson correlation of 0.98.** Thus, ADEM correlates well with humans both at the response and system level.
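The correlations reported throughout this section are computed as follows; a minimal sketch with illustrative scores:

```python
# Minimal sketch: Spearman (rank) and Pearson (linear) correlation between
# metric scores and human scores, each with its p-value.
from scipy.stats import spearmanr, pearsonr

human = [4, 2, 5, 1, 3, 4, 2, 5]
metric = [3.9, 2.4, 4.1, 1.6, 2.8, 3.5, 2.1, 4.6]

rho, rho_p = spearmanr(human, metric)
r, r_p = pearsonr(human, metric)
print(f"Spearman {rho:.3f} ({rho_p:.3f}), Pearson {r:.3f} ({r_p:.3f})")
```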
Generalization to previously unseen models When ADEM is used in practice, it will take as input responses from a new model that it has not seen during training. Thus, it is crucial that ADEM correlates with human judgements for new models. We test ADEM's generalization ability by performing a leave-one-out evaluation: for each dialogue model that was the source of response data for training ADEM (TF-IDF, Dual Encoder, HRED, humans), we conduct an experiment where we train on all model responses except those from the chosen model, and test only on the model that was unseen during training.

Metric    Pearson
BLEU-1    -0.079 (0.921)
BLEU-2    0.308 (0.692)
BLEU-3    -0.537 (0.463)
BLEU-4    -0.536 (0.464)
ROUGE     0.268 (0.732)
ADEM      0.981 (0.019)

Table 4: System-level correlation, with the p-value in brackets.

Note that our word-overlap correlation results in Table 3 are also lower than those presented in Galley et al. (2015). This is because Galley et al. measure corpus-level correlation, i.e. correlation averaged across different subsets (of size 100) of the data, and pre-filter for high-quality reference responses.

** For comparison, BLEU achieves a system-level correlation of 0.99 on 5 models in the translation domain (Papineni et al., 2002).

The results are given in Table 5. Overall, we observe that the ADEM model is very robust, and is capable of generalizing to new models in all cases. When testing the correlation on the entire test set, the model achieves comparable correlations to the ADEM model that was trained on 25% less data selected at random. This is particularly surprising for the HRED model; in this case, ADEM was trained only on responses that were written by humans (from retrieval models or human-generated), but is able to generalize to responses produced by a generative neural network model. This demonstrates ADEM's ability to accurately score new neural network-based dialogue models.

                 Test on full dataset                  Test on removed model responses
Data Removed     Spearman          Pearson             Spearman          Pearson
TF-IDF           0.4097 (<0.001)   0.3975 (<0.001)     0.3931 (<0.001)   0.3645 (<0.001)
Dual Encoder     0.4000 (<0.001)   0.3907 (<0.001)     0.4256 (<0.001)   0.4098 (<0.001)
HRED             0.4128 (<0.001)   0.3961 (<0.001)     0.3998 (<0.001)   0.3956 (<0.001)
Human            0.4052 (<0.001)   0.3910 (<0.001)     0.4472 (<0.001)   0.4230 (<0.001)
Average          0.4069 (<0.001)   0.3938 (<0.001)     0.4164 (<0.001)   0.3982 (<0.001)
25% at random    0.4077 (<0.001)   0.3932 (<0.001)

Table 5: Correlation for ADEM when various model responses are removed from the training set. The left two columns show performance on the entire test set, and the right two columns show performance on responses only from the dialogue model not seen during training. The last row (25% at random) corresponds to the ADEM model trained on all model responses, but with the same amount of training data as the models above (i.e. 25% less data than the full training set).
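The leave-one-out protocol above amounts to the following loop; a minimal sketch in which `train_adem` and `evaluate` are hypothetical stand-ins (stubbed out here) for the actual training and correlation code.

```python
# Minimal sketch of leave-one-out evaluation over response sources.
SOURCES = ["tfidf", "dual_encoder", "hred", "human"]

def train_adem(examples):
    # Stub standing in for ADEM training; returns a scoring function.
    return lambda ex: 3.0

def evaluate(score_fn, examples):
    # Stub standing in for the Spearman/Pearson correlation computation.
    return [score_fn(ex) for ex in examples]

def leave_one_out(dataset):
    results = {}
    for held_out in SOURCES:
        train = [ex for ex in dataset if ex["source"] != held_out]
        test = [ex for ex in dataset if ex["source"] == held_out]
        results[held_out] = evaluate(train_adem(train), test)
    return results

dataset = [{"source": "hred", "human": 4}, {"source": "human", "human": 5}]
print(leave_one_out(dataset))
```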
Context: "photo to see my television debut go to - some. some on <url> - hehe <url> - it really was you? i thought ppl were recognizing someone who looked like you! were the oysters worth the wait?" | Reference response: "yeah it was me . haha i'd kinda forgotten about it it was filmed a while ago"
  1) i'm not sure. i just don't know what to do with it.   (human: 3, ADEM: 1.602)
  2) you heard the horsepower productions remix of lee scratch perry's 'exercising' off his 'mighty upsetter' album?   (human: 1, ADEM: 1.513)
  3) you wont chug a fuzzy peach navel   (human: 1, ADEM: 1.744)
  4) they were!   (human: 5, ADEM: 3.274)

Context: "just beat call of duty!! → want a cookie? → yes!! → come get it" | Reference response: "im in kenmore at the moment"
  1) i'm gonna get a new phone some moro   (human: 1, ADEM: 1.848)
  2) no way man.   (human: 5, ADEM: 4.265)
  3) wow i just got a free pizza coupon! get yours before theres no more! <url>   (human: 1, ADEM: 0.921)
  4) i'm going to go to the mall.   (human: 1, ADEM: 2.634)

Context: "am i out of twitter jail yet? testing → yeah. i posted bail → thanks. i am a right chatter tweetbox on sundays. same thing happened last sunday lol" | Reference response: "any news on meeting our user ? i go to the us on fri same day and i don't want to miss anything arranged"
  1) i'm not sure if i'm going to be able to get it.   (human: 3, ADEM: 1.912)
  2) good to see another mac user in the leadership ranks   (human: 4, ADEM: 1.417)
  3) awww poor baby hope u get to feeling better soon. maybe some many work days at piedmont   (human: 2, ADEM: 1.123)
  4) did you tweet too much?   (human: 5, ADEM: 2.539)

Table 7: Examples of scores given by the ADEM model.

Qualitative Analysis To illustrate some strengths and weaknesses of ADEM, we show human and ADEM scores for each of the responses to various contexts in Table 7. There are several instances where ADEM predicts accurately: in particular, ADEM is often very good at assigning low scores to poor responses. This is seen in the first two contexts, where most of the responses given a score of 1 by humans are given scores less than 2 by ADEM. The single exception is response (4) for the second context, which seems somewhat appropriate and should perhaps have been scored higher by the human evaluator. There are also several instances where the model assigns high scores to suitable responses, as in the first two contexts.

However, ADEM is sometimes too conservative when predicting response scores. This is the case in the third context, where the model assigns low scores to most of the responses that a human rated highly (although response (2) is arguably not relevant to the context). This behaviour is likely due to the squared error loss used to train ADEM; since the model receives a large penalty for incorrectly predicting an extreme value, it learns to predict scores closer to the average human score.

Metric scores                              # Examples
Human ≥ 4                                  237 out of 616
and (|BLEU-2| ≤ 2, |ROUGE| ≤ 2)            146 out of 237
and |ADEM| ≥ 4                             60 out of 146
and |ADEM| ≤ 2                             42 out of 237
and (|BLEU-2| ≥ 4 or |ROUGE| ≥ 4)          14 out of 42

Table 6: In 60/146 cases, ADEM scores good responses (human score ≥ 4) highly when word-overlap metrics fail. The bars around |metric| indicate that the metric scores have been normalized.

Correlation with response length One implicit assumption in the ADEM model is that the human evaluations of model responses are absolutely correct, including the biases that humans exhibit when evaluating dialogues. For example, it has been shown that humans have a tendency to give a higher rating to shorter responses than to longer responses (Serban et al., 2016b), as shorter responses are often more generic and thus are more likely to be suitable to the context.
This affects dialogue response evaluation: we calculated the test set correlation between response length and the human score, and obtained a significant Pearson correlation of 0.27, and a Spearman correlation of 0.32. If the assumption that human evaluators are absolutely correct is not accurate, it may be desirable to remove human biases in an automatic evaluation model to improve the model's generalization capabilities. This is an important direction for future work.

Improvement over word-overlap metrics Next, we analyze more precisely how ADEM outperforms traditional word-overlap metrics such as BLEU-2 and ROUGE. We first normalize the metric scores to have the same mean and variance as human scores, clipping the resulting scores to the range [1, 5] (we assign raw scores of 0 a normalized score of 1); a sketch of this normalization is given below. We indicate normalization with vertical bars around the metric. We then select all of the good responses that were given low scores by word-overlap metrics (i.e. responses which humans scored as 4 or higher, and which |BLEU-2| and |ROUGE| scored as 2 or lower). The results are summarized in Table 6: of the 237 responses that humans scored 4 or higher, most of them (146/237) were ranked very poorly by both BLEU-2 and ROUGE. This quantitatively demonstrates what we argued qualitatively in Figure 1: a major failure of word-overlap metrics is the inability to consider reasonable responses that have no word-overlap with the reference response. We can also see that, in almost half (60/146) of the cases where both BLEU-2 and ROUGE fail, |ADEM| is able to correctly assign a score greater than 4. For comparison, there are only 42 responses where humans give a score of 4 or higher and |ADEM| gives a score less than 2, and only 14 of these are assigned a score greater than 4 by either |BLEU-2| or |ROUGE|.
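A minimal sketch of the normalization just described, with illustrative inputs:

```python
# Minimal sketch: rescale a metric's scores to match the mean and variance
# of the human scores, clip to [1, 5], and map raw scores of 0 to 1.
import numpy as np

def normalize(metric_scores, human_scores):
    m = np.asarray(metric_scores, dtype=float)
    h = np.asarray(human_scores, dtype=float)
    z = (m - m.mean()) / (m.std() + 1e-8)        # standardize the metric
    out = np.clip(z * h.std() + h.mean(), 1, 5)  # match human mean/variance
    out[m == 0] = 1.0                            # raw score 0 -> normalized 1
    return out

print(normalize([0.0, 0.2, 0.9, 0.4], [1, 2, 5, 3]))
```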
To provide further insight, we give specific examples of responses that are scored highly (≥ 4) by both humans and |ADEM|, and poorly (≤ 2) by both |BLEU-2| and |ROUGE|, in Table 9. We draw 3 responses randomly (i.e. no cherry-picking) from the 60 test set responses that meet this criteria. We can observe that ADEM is able to recognize short responses that are appropriate to the context, without word-overlap with the reference response. This is even the case when the model and reference responses have very little semantic similarity, as in the first and third examples in Table 9.

Context: "id recommend <url> - or build a htpc and put <url> on it. → you're the 2nd person this week that's recommended roku to me." | Reference response: "buy an htpc with xbmc is what i run . but i've decked out my setup . i've got <number> tb of data on my home server" | Model response: "because. it's brilliant"   (human: 5, |BLEU-2|: 1.0, |ROUGE|: 1.0, |ADEM|: 4.726)

Context: "imma be an auntie this weekend. i guess i have to go albany. herewego → u supposed to been here → i come off nd on." | Reference response: "lol you sometiming" | Model response: "haha, anyway, how're you?"   (human: 5, |BLEU-2|: 1.0, |ROUGE|: 1.0, |ADEM|: 4.201)

Context: "my son thinks she is plain. and the girl that plays her sister. seekhelp4him? → send him this. he'll thank you. <url>" | Reference response: "you are too kind for words . i will do"   (human: 5, |BLEU-2|: 1.0, |ROUGE|: 1.0, |ADEM|: 5.0)

Table 9: Examples where both human and ADEM score the model response highly, while BLEU-2 and ROUGE do not. These examples are drawn randomly (i.e. no cherry-picking) from the examples where ADEM outperforms BLEU-2 and ROUGE (as defined in the text). ADEM is able to correctly assign high scores to short responses that have no word-overlap with the reference response. The bars around |metric| indicate that the metric scores have been normalized.

Finally, we show the behaviour of ADEM when there is a discrepancy between the lengths of the reference and model responses. In Liu et al. (2016), the authors show that word-overlap metrics such as BLEU-1, BLEU-2, and METEOR exhibit a bias in this scenario: they tend to assign higher scores to responses that are closer in length to the reference response.* However, humans do not exhibit this bias; in other words, the quality of a response as judged by a human is roughly independent of its length. In Table 8, we show that ADEM also does not exhibit this bias towards similar-length responses.

           Mean score
           Δw < 6 (n=312)    Δw ≥ 6 (n=304)    p-value
ROUGE      0.042             0.031             < 0.01
BLEU-2     0.0022            0.0007            0.23
ADEM       2.072             2.015             0.23
Human      2.671             2.698             0.83

Table 8: Effect of differences in response length on the score. Δw = absolute difference in #words between the reference response and proposed response. BLEU-1, BLEU-2, and METEOR have previously been shown to exhibit bias towards similar-length responses (Liu et al., 2016).

* Note that, for our dataset, BLEU-2 almost exclusively assigns scores near 0 for both Δw < 6 and Δw ≥ 6, resulting in a p-value > 0.05.

Related to our approach is the literature on novel methods for the evaluation of machine translation systems, especially through the WMT evaluation task (Callison-Burch et al., 2011; Machacek & Bojar, 2014; Stanojevic et al., 2015). In particular, Gupta et al. (2015) have recently proposed to evaluate machine translation systems using Tree-LSTMs. Their approach differs from ours as, in the dialogue domain, we must additionally condition our score on the context of the conversation, which is not necessary in translation.

Several recent approaches use hand-crafted reward features to train dialogue models using reinforcement learning (RL). For example, Li et al. (2016b) use features related to ease of answering and information flow, and Yu et al. (2016) use metrics related to turn-level appropriateness and conversational depth. These metrics are based on hand-crafted features, which only capture a small set of relevant aspects; this inevitably leads to sub-optimal performance, and it is unclear whether such objectives are preferable over retrieval-based cross-entropy or word-level maximum log-likelihood objectives. Furthermore, many of these metrics are computed at the conversation-level, and are not available for evaluating single dialogue responses. The metrics that can be computed at the response-level could be incorporated into our framework, for example by adding a term to equation 1 consisting of a dot product between these features and a vector of learned parameters.

There has been significant work on evaluation methods for task-oriented dialogue systems, which attempt to solve a user's task such as finding a restaurant. These methods include the PARADISE framework (Walker et al., 1997) and MeMo (Möller et al., 2006), which consider a task completion signal. Our models do not attempt to model task completion, and thus fall outside this domain."}, {"section_index": "4", "section_name": "7 DISCUSSION", "section_text": "The evaluation model proposed in this paper favours dialogue models that generate responses that are rated as highly appropriate by humans. It is likely that this property does not fully capture the desired end-goal of chatbot systems. For example, one issue with building models to approximate human judgements of response quality is the problem of generic responses. Since humans often provide high scores to generic responses due to their appropriateness for many given contexts, a model trained to predict these scores will exhibit the same behaviour. An important direction for future work is modifying ADEM such that it is not subject to this bias. This could be done, for example, by censoring ADEM's representations (Edwards & Storkey, 2016) such that they do not contain any information about length.
Alternatively, one could build a second evaluation model that assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. In this case, a model that generates generic responses will easily be distinguishable and obtain a low score.

An important direction of future research is building models that can evaluate the capability of a dialogue system to have an engaging and meaningful interaction with a human. Compared to evaluating a single response, this evaluation is arguably closer to the end-goal of chatbots. However, such an evaluation is extremely challenging to do in a completely automatic way. We view the evaluation procedure presented in this paper as an important step towards this goal; current dialogue systems are incapable of generating responses that are rated as highly appropriate by humans, and we believe our evaluation model will be useful for measuring and facilitating progress in this direction.

We use the Twitter Corpus to train our models as it contains a broad range of non-task-oriented conversations and has been used to train many state-of-the-art models. However, our model could easily be extended to other general-purpose datasets, such as Reddit, once similar pre-trained models become publicly available. Such models are necessary even for creating a test set in a new domain, which will help us determine if ADEM generalizes to related dialogue domains. We leave investigating the domain transfer ability of ADEM for future work.

We'd like to thank Casper Liu for his help with the correlation code, Laurent Charlin for helpful discussions on the data collection, Jason Weston for suggesting improvements in the experiments, and Jean Harb and Emmanuel Bengio for their debugging skills. We gratefully acknowledge support from the Samsung Institute of Advanced Technology, the National Science and Engineering Research Council, and Calcul Quebec. We'd also like to thank the developers of Theano (Team et al., 2016)."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, volume 29, pp. 65-72, 2005.

Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.

S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. Generating sentences from a continuous space. COLING, 2016.

C. Callison-Burch, P. Koehn, C. Monz, and O. F. Zaidan. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pp. 22-64. Association for Computational Linguistics, 2011.

B. Chen and C. Cherry. A systematic comparison of smoothing techniques for sentence-level BLEU. ACL 2014, pp. 362, 2014.

J. Cohen. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213, 1968.

T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

B. Dhingra, Z. Zhou, D. Fitzpatrick, M. Muehl, and W. W. Cohen. Tweet2vec: Character-based distributed representations for social media. arXiv preprint arXiv:1605.03481, 2016.

H. Edwards and A. Storkey. Censoring representations with an adversary. ICLR, 2016.

P. Gage. A new algorithm for data compression. The C Users Journal, 12(2):23-38, 1994.
R. Gupta, C. Orasan, and J. van Genabith. ReVal: A simple and effective machine translation evaluation metric based on recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.

S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, pp. 91, 1991.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukacs, M. Ganea, P. Young, et al. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), volume 36, pp. 495-503, 2016.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3276-3284, 2015.

J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.

J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155, 2016a.

J. Li, W. Monroe, A. Ritter, and D. Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b.

C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8, Barcelona, Spain, 2004.

C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau.
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.

R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909, 2015.

M. Machacek and O. Bojar. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 293-301. Citeseer, 2014.

J. Markoff and P. Mozur. For sympathetic ear, more Chinese turn to smartphone program. NY Times, 2015.

S. Möller, R. Englert, K.-P. Engelbrecht, V. V. Hafner, A. Jameson, A. Oulasvirta, A. Raake, and N. Reithinger. MeMo: Towards automatic usability evaluation of spoken dialogue services by user error simulations. In INTERSPEECH, 2006.

K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, 2002.

K. Pearson. Principal components analysis. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572, 1901.

R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.

I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pp. 3776-3784, 2016a.

I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016b.

L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.

A. Sordoni, Y. Bengio, H. Vahabi, C. Lioma, J. Grue Simonsen, and J.-Y. Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp. 553-562. ACM, 2015a.

A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015b.

J. Weizenbaum. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45, 1966.

Z. Yu, Z. Xu, A. W. Black, and A. I. Rudnicky. Strategy and policy learning for non-task-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 404, 2016."}, {"section_index": "6", "section_name": "APPENDIX A: FURTHER NOTES ON CROWDSOURCING DATA COLLECTION", "section_text": "Before conducting the primary crowdsourcing experiments to collect the dataset in this paper, we ran a series of preliminary experiments to see how AMT workers responded to different questions. Unlike the primary study, where we asked a small number of overlapping questions to determine the κ score and filtered users based on the results, we conducted a study where all responses (40 in total, from 10 contexts) were overlapping.
We did this for 18 users in two trials, resulting in 153 pair-wise correlation scores per trial.

In the first trial, we asked the following questions to the users, for each response:

1. How appropriate is the response overall? (overall, scale of 1-5)
2. How on-topic is the response? (topicality, scale of 1-5)
3. How specific is the response to some context? (specificity, scale of 1-5)
4. How much background information is required to understand the context? (background, scale of 1-5)

Note that we do not ask for fluency, as 3/4 of the responses for each context were written by a human (including retrieval models). We also provided the AMT workers with examples that have high topicality and low specificity, and examples with high specificity and low topicality. The background question was only asked once for each context.

We observed that both the overall scores and topicality had fairly high inter-annotator agreement (as shown in Table 2), but were strongly correlated with each other (i.e. participants would often put the same scores for topicality and overall score). Conversely, specificity (κ = 0.12) and background (κ = 0.05) had very low inter-annotator agreements.

To better visualize the data, we produce scatterplots showing the distribution of scores for different responses, for each of the four questions in our survey (Figure 5). We can see that the overall and topicality scores are clustered for each question, indicating high agreement. However, these clusters are most often in the same positions for each response, which indicates that they are highly correlated with each other. Specificity and background information, on the other hand, show far fewer clusters, indicating lower inter-annotator agreement. We conjectured that this was partially because the terms 'specificity' and 'background information', along with our descriptions of them, had a high cognitive load, and were difficult to understand in the context of our survey.

To test this hypothesis, we conducted a new survey where we tried to ask the questions for specificity and background in a more intuitive manner. We also changed the formulation of the background question to be a binary 0-1 decision of whether users understood the context. We asked the following questions:

1. How appropriate is the response overall? (overall, scale of 1-5)
2. How on-topic is the response? (topicality, scale of 1-5)
3. How common is the response? (informativeness, scale of 1-5)
4. Does the context make sense? (context, scale of 0-1)

We also clarified our description for the third question, including providing more intuitive examples. Interestingly, the inter-annotator agreement on informativeness, κ = 0.31, was much higher than that for specificity in the original survey. Thus, the formulation of questions in a crowdsourcing survey has a large impact on inter-annotator agreement. For the context, we found that users either agreed highly (κ > 0.9 for 45 participants), or not at all (κ < 0.1 for 113 participants).

We also experimented with asking the overall score on a separate page, before asking questions 2-4, and found that this increased the κ agreement slightly. Similarly, excluding all scores where participants indicated they did not understand the context improved inter-annotator agreement slightly. Due to these observations, we decided to only ask users for their overall quality score for each response, as it is unclear how much additional information is provided by the other questions in the context of dialogue. We hope this information is useful for future crowdsourcing experiments in the dialogue domain.
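For reference, a minimal sketch of the inter-annotator agreement statistic used above, Cohen's weighted kappa (Cohen, 1968), between two annotators' ratings of the same responses; the ratings are illustrative, and the linear weighting scheme is an assumption, as the text does not specify one.

```python
# Minimal sketch: weighted kappa between two annotators' 1-5 ratings.
from sklearn.metrics import cohen_kappa_score

annotator_a = [5, 4, 1, 2, 5, 3, 4, 1]
annotator_b = [4, 4, 2, 2, 5, 3, 5, 1]
kappa = cohen_kappa_score(annotator_a, annotator_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```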
Figure 5: Scatter plots showing the distribution of scores (vertical axis) for different responses (horizontal axis), for each of the four questions in our survey. It can be seen that the overall and topicality scores are clustered for each question, indicating high agreement, while this is not the case for specificity or background information. Note that all scores are normalized on a per-user basis, based on the average score given by each user."}, {"section_index": "7", "section_name": "APPENDIX B: METRIC DESCRIPTION", "section_text": "BLEU BLEU (Papineni et al., 2002) analyzes the co-occurrences of n-grams in the ground truth and the proposed responses. It computes the n-gram precision for the whole dataset, which is then multiplied by a brevity penalty to penalize short translations. For BLEU-N, N denotes the largest value of n-grams considered (usually N = 4):

P_n(r, \hat{r}) = \frac{\sum_k \min(h(k, r), h(k, \hat{r}))}{\sum_k h(k, \hat{r})}

where k indexes all possible n-grams of length n and h(k, r) is the number of n-grams k in r. Note that the min in this equation is calculating the number of co-occurrences of n-gram k between the ground truth response r and the proposed response r̂, as it computes the fewest appearances of k in either response. To avoid the drawbacks of using a precision score, namely that it favours shorter (candidate) sentences, the authors introduce a brevity penalty. BLEU-N, where N is the maximum length of n-grams considered, is defined as:

\text{BLEU-}N := b(r, \hat{r}) \, \exp\!\left(\sum_{n=1}^{N} \beta_n \log P_n(r, \hat{r})\right)

\beta_n is a weighting that is usually uniform, and b(\cdot) is the brevity penalty. The most commonly used version of BLEU assigns N = 4. Modern versions of BLEU also use sentence-level smoothing, as the geometric mean often results in scores of 0 if there is no 4-gram overlap (Chen & Cherry, 2014).

Note that BLEU is usually calculated at the corpus-level, and was originally designed for use with multiple reference sentences.
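A minimal sketch of the n-gram precision and BLEU-N defined above, with uniform weights β_n = 1/N, the standard brevity penalty, naive whitespace tokenization, and no smoothing:

```python
# Minimal sketch of P_n and BLEU-N from the definitions above.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def p_n(ref, hyp, n):
    r, h = ngrams(ref, n), ngrams(hyp, n)
    overlap = sum(min(count, r[g]) for g, count in h.items())
    return overlap / max(sum(h.values()), 1)

def bleu(ref, hyp, N=4):
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    precisions = [p_n(ref, hyp, n) for n in range(1, N + 1)]
    if min(precisions) == 0:        # geometric mean is 0 without smoothing
        return 0.0
    return bp * math.exp(sum(math.log(p) for p in precisions) / N)

print(bleu("the cat sat on the mat".split(), "the cat sat on the rug".split()))
```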
METEOR The METEOR metric (Banerjee & Lavie, 2005) was introduced to address several weaknesses in BLEU. It creates an explicit alignment between the candidate and target responses. The alignment is based on exact token matching, followed by WordNet synonyms, stemmed tokens, and then paraphrases. Given a set of alignments m, the METEOR score is the harmonic mean of precision P_m and recall R_m between the candidate and target sentence. The penalty term Pen is based on the 'chunkiness' of the resolved matches. We use the default values for the hyperparameters.

ROUGE ROUGE (Lin, 2004) is a set of evaluation metrics used for automatic summarization. We consider ROUGE-L, which is an F-measure based on the Longest Common Subsequence (LCS) between a candidate and target sentence. The LCS is a set of words which occur in two sentences in the same order; however, unlike n-grams the words do not have to be contiguous, i.e. there can be other words in between the words of the LCS. ROUGE-L is computed using an F-measure between the reference response and the proposed response:

R_{lcs} = \frac{l(c_i, s_{ij})}{|s_{ij}|}, \qquad P_{lcs} = \frac{l(c_i, s_{ij})}{|c_i|}, \qquad \text{ROUGE-L}(c_i, s_{ij}) = \frac{(1 + \beta^2)\, R_{lcs}\, P_{lcs}}{R_{lcs} + \beta^2 P_{lcs}}

where l(c_i, s_{ij}) is the length of the LCS between the sentences. \beta is usually set to favour recall (\beta = 1.2)."}, {"section_index": "8", "section_name": "APPENDIX C: LATENT VARIABLE HIERARCHICAL RECURRENT ENCODER-DECODER (VHRED)", "section_text": "The VHRED model is an extension of the original hierarchical recurrent encoder-decoder (HRED) model with an additional component: a high-dimensional stochastic latent variable at every dialogue turn. The dialogue context is encoded into a vector representation using the utterance-level and context-level RNNs from our encoder. Conditioned on the summary vector at each dialogue turn, VHRED samples a multivariate Gaussian variable that is provided, along with the context summary vector, as input to the decoder RNN, which in turn generates the response word-by-word. We use representations from the VHRED model as it produces more diverse and coherent responses compared to its HRED counterpart.

The VHRED model is trained to maximize a lower-bound on the log-likelihood of generating the next response:

\log P_\theta(w_n \mid w_1, \dots, w_{n-1}) \geq \mathbb{E}_{Q_\psi(z_n \mid w_1, \dots, w_n)}\!\left[\log P_\theta(w_n \mid z_n, w_1, \dots, w_{n-1})\right] - KL\!\left[Q_\psi(z_n \mid w_1, \dots, w_n) \,\|\, P_\theta(z_n \mid w_1, \dots, w_{n-1})\right]

where KL[Q‖P] is the Kullback-Leibler (KL) divergence between distributions Q and P. The distribution Q_\psi(z_n \mid w_1, \dots, w_n) = \mathcal{N}(\mu_{\text{posterior}}(w_1, \dots, w_n), \Sigma_{\text{posterior}}(w_1, \dots, w_n)) is the approximate posterior distribution (or recognition model) which approximates the intractable true posterior distribution P_\psi(z_n \mid w_1, \dots, w_N). The posterior mean \mu_{\text{posterior}} and covariance \Sigma_{\text{posterior}} (as well as those of the prior) are computed using a feed-forward neural network, which takes as input the concatenation of the vector representations of the past utterances and that of the current utterance.

The multivariate Gaussian latent variable in the VHRED model allows modelling ambiguity and uncertainty in the dialogue through the latent variable distribution parameters (mean and variance). This provides a useful inductive bias, which helps VHRED encode the dialogue context into a real-valued embedding space even when the dialogue context is ambiguous or uncertain, and it helps VHRED generate more diverse responses.

Figure 6: The VHRED model used for pre-training. The hierarchical structure of the RNN encoder is shown in the red box around the bottom half of the figure. After training using the VHRED procedure, the last hidden state of the context-level encoder is used as a vector representation of the input text.
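For reference, a minimal sketch of the KL term in the bound above, under the assumption of diagonal Gaussian posterior and prior (the standard choice when both are parameterized by feed-forward networks); the latent dimensionality is illustrative.

```python
# Minimal sketch: KL[Q || P] for diagonal Gaussians
# Q = N(mu_q, var_q) and P = N(mu_p, var_p), summed over latent dimensions.
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p
                        - 1.0)

z_dim = 100
kl = kl_diag_gaussians(np.zeros(z_dim), 0.5 * np.ones(z_dim),
                       np.zeros(z_dim), np.ones(z_dim))
print(kl)
```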
"}, {"section_index": "9", "section_name": "HYPERPARAMETERS", "section_text": "When evaluating our model, we conduct early stopping on an external validation set to obtain the best parameter setting. We similarly choose our hyperparameters (PCA dimension n, L1 regularization penalty γ, learning rate α, and batch size b) based on validation set results. Our best ADEM model used γ = 0.02, α = 0.01, and b = 16. For ADEM with tweet2vec embeddings, we did a similar hyperparameter search, and used n = 150, γ = 0.01, α = 0.01, and b = 16.

New results on Liu et al. (2016) data In order to ensure that the correlations between word-overlap metrics and human judgements were comparable across datasets, we standardized the processing of the evaluation dataset from Liu et al. (2016). In particular, the original data from Liu et al. (2016) has a token (either '<first_speaker>', '<second_speaker>', or '<third_speaker>') at the beginning of each utterance. This is an artifact left over by the processing used as input to the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a;b). Removing these tokens makes sense for establishing the ability of word-overlap models, as they are unrelated to the content of the tweets.

We perform this processing, and report the updated results for the word-overlap metrics in Table 10. Surprisingly, almost all significant correlation disappears, particularly for all forms of the BLEU score. Thus, we can conclude that the word-overlap metrics were heavily relying on the overlap of these tokens between model responses and reference responses.

Metric    Spearman         Pearson
BLEU-1    -0.026 (0.80)    0.016 (0.87)
BLEU-2    0.065 (0.52)     0.080 (0.43)
BLEU-3    0.139 (0.17)     -0.088 (0.39)
BLEU-4    0.139 (0.17)     0.092 (0.36)
ROUGE     -0.083 (0.41)    -0.010 (0.92)

Table 10: Correlations between word-overlap metrics and human judgements on the dataset from Liu et al. (2016), after removing the speaker tokens at the beginning of each utterance. The correlations are even worse than estimated in the original paper, and none are significant.

Evaluation speed An important property of evaluation models is speed. We show the evaluation time on the test set for ADEM on both CPU and a Titan X GPU (using Theano, without cuDNN) in Table 11. When run on GPU, ADEM is able to evaluate responses in a reasonable amount of time (approximately 2.5 minutes). This includes the time for encoding the contexts, model responses, and reference responses into vectors with the hierarchical RNN, in addition to computing the PCA projection, but does not include pre-training with VHRED. For comparison, if run on a test set of 10,000 responses, ADEM would take approximately 45 minutes. This is significantly less time consuming than setting up human experiments, and we have not yet made any effort to optimize the speed of the ADEM model.

[Table 11: Evaluation time of ADEM on the test set, on CPU and GPU.]

Learning curves To show that our learning procedure for ADEM really is necessary, and that the embeddings produced by VHRED are not sufficient to evaluate dialogue systems, we plot the Spearman and Pearson correlations on the test set as a function of the number of epochs in Figure 7. It is clear that, at the beginning of training, when the matrices M and N have been initialized to the identity, the model is incapable of accurately predicting human scores, and its correlation is approximately 0.

Figure 7: Plots showing the Spearman correlation (a) and Pearson correlation (b) on the test set as ADEM trains. At the beginning of training, the model does not correlate with human judgements.

Failure analysis We now conduct a failure analysis of the ADEM model.
In particular, we look at two different cases: responses where both humans and (normalized) ROUGE or BLEU-2 score highly (a score of 4 out of 5 or greater) while ADEM scores poorly (2 out of 5 or lower), and the converse, where ADEM scores the response highly while humans and either ROUGE or BLEU-2 score it poorly. We randomly sample (i.e. without cherry-picking) three examples of each case, which are shown in Tables 12 and 13.

From Table 12, the cases where ADEM misses a good response, we can see that there are a variety of reasons for this kind of failure. In the first example, ADEM is not able to match the fact that the model response talks about sleep to the reference response or context. This is possibly because the utterance contains a significant amount of irrelevant information: indeed, the first two sentences are not related to either the context or reference response. In the second example, the model response does not seem particularly relevant to the context; despite this, the human scoring this example gave it 4/5. This illustrates one drawback of human evaluations: they are quite subjective, and often have some noise. This makes it difficult to learn an effective ADEM model. Finally, ADEM is unable to score the third response highly, even though it is very closely related to the reference response.

We can observe from the first two examples in Table 13, where the ADEM model erroneously ranks the model responses highly, that ADEM is occasionally fooled into giving high scores for responses that are completely unrelated to the context. This may be because both of the utterances are short, and short utterances are ranked higher by humans in general since they are often more generic (as detailed in Section 5). In the third example, the response actually seems to be somewhat reasonable given the context; this may be an instance where the human evaluator provided a score that was too low.

Context: "what theme do you guys want next on tumblr? we've had mariskamommymoments what do you want to see next? → im sorry. hope you feel better soon! → it will wear off. just hate feeling like this → im sure it will! just relax and take your time → i'm okay. just overly tired" | Reference response: "maybe you need a bit more sleep" | Model response: "i'm really impressed. first guy to said that p what's time? sleep late its not good. i'm worried"   (human: 4, |BLEU-2|: 2.33, |ROUGE|: 3.0, |ADEM|: 1.0)

Context: "some pm syria - the editor of syrian government daily tishrin said she had been sacked over remarks to al <url> → replaced by business editor! sounds like the states lol" | Reference response: "msm is very similar the world over ! tied to the government ." | Model response: "im not sure if im going to be able to see the <unk>. im not sure if i should be able to see it."   (human: 4, |BLEU-2|: 2.33, |ROUGE|: 4.75, |ADEM|: 1.22)

Context: "wonder how long it will take wayne rooney to get himself into trouble on twitter. odds? → probably. a long time, because i used the address in the paper to find the page and it doesn't exist! → here you go" | Reference response: "thanks . i wonder why it didn't work !" | Model response: "thanks let me see if this one works ?"   (human: 5, |BLEU-2|: 2.33, |ROUGE|: 4.24, |ADEM|: 1.33)

Table 12: Examples where a human and either BLEU-2 or ROUGE (after normalization) score the model response highly (≥ 4/5), while the ADEM model scored it poorly (≤ 2/5). These examples are drawn randomly (i.e. no cherry-picking). The bars around |metric| indicate that the metric scores have been normalized.
Context: "rage slightly dissipated. wouldn't have bothered restoring my phone but i need it to moan at tomorrow. → speaking of moaning. i'm actually going to email that chap that letter right now. → good plan ." | Reference response: "i m going to do a little wee blog about it too . all nice and measured , of course" | Model response: "some. some unfortunately"   (human: 2, |BLEU-2|: 2.53, |ROUGE|: 1.57, |ADEM|: 4.38)

Context: "high school flings college relationships → i seems like the other way around from wat i've seen" | Reference response: "word. i ve seen a little of both more of the college though" | Model response: "king james"   (human: 1, |BLEU-2|: 2.53, |ROUGE|: 1.57, |ADEM|: 5.0)

Context: "is it getting light outside? i swear it looks blue. → time to go to sleepppy" | Reference response: "for you, im staying up" | Model response: "im going to the beach."   (human: 1, |BLEU-2|: 2.53, |ROUGE|: 1.37, |ADEM|: 5.0)

Table 13: Examples where a human and either BLEU-2 or ROUGE (after normalization) score the model response low (≤ 2/5), while the ADEM model scored it highly (≥ 4/5). These examples are drawn randomly (i.e. no cherry-picking). The bars around |metric| indicate that the metric scores have been normalized.

Data efficiency How much data is required to train ADEM? We conduct an experiment where we train ADEM on different amounts of training data, from 5% to 100%. The results are shown in Table 14. We can observe that ADEM is very data-efficient, and is capable of reaching a Spearman correlation of 0.4 using only half of the available training data (1,000 labelled examples). ADEM correlates significantly with humans even when only trained on 5% of the original training data (100 labelled examples).

Training data %    Spearman    p-value    Pearson    p-value
100% of data       0.414       <0.001     0.395      <0.001
75% of data        0.408       <0.001     0.393      <0.001
50% of data        0.400       <0.001     0.391      <0.001
25% of data        0.330       <0.001     0.331      <0.001
10% of data        0.245       <0.001     0.265      <0.001
5% of data         0.098       0.015      0.161      <0.001

Table 14: ADEM correlations when trained on different amounts of data."}]
r17RD2oxe
[{"section_index": "0", "section_name": "DEEP NEURAL NETWORKS AND THE TREE OF LIFE", "section_text": "Yan Wang*, Kun He†

In Evolutionary Biology, species close in the tree of evolution are identified by similar visual features. In computer vision, deep neural networks perform image classification by learning to identify similar visual features. This leads to an interesting question: is it possible to leverage the advantage of deep networks to construct a tree of life? In this paper, we make the first attempt at building the phylogenetic tree diagram by leveraging the high-level features learned by deep neural networks. Our method is based on the intuition that if two species share similar features, then their cross activations in the softmax layer should be high. Based on the deep representation of convolutional neural networks trained for image classification, we build a tree of life for species in the image categories of ImageNet. Further, for species not in the ImageNet categories that are visually similar to some category, the cosine similarity of their activation vectors in the same layer should be high. By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we can roughly build their tree of life. Our work provides a new perspective to the deep representation and sheds light on possible novel applications of deep representation to other areas like Bioinformatics."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning transforms the data into compact intermediate representations akin to principal components, and derives layered structures by removing the redundancy in representations (Deng & Yu, 2014). In recent years, deep learning has demonstrated great success with significant improvement in various artificial intelligence applications, including speech recognition, image recognition (Ciresan et al., 2012; Krizhevsky et al., 2012), and natural language processing (Vinyals et al., 2015; Socher et al., 2013).

Convolutional Neural Networks (CNNs) are mainly designed for image and video recognition. Typical CNN architecture alternates convolutional layers and pooling layers, followed by several fully connected or sparsely connected layers with a final softmax as the classification layer. Milestones include the 8-layer AlexNet (Krizhevsky et al., 2012), the 19-layer VGG (Simonyan & Zisserman, 2014), and the 22-layer GoogleNet (Szegedy et al., 2015). By adding the identity function as a shortcut, He et al. (2016) are able to build a substantially deeper ResNet with 152 layers, which received the first place on the ILSVRC 2015 image classification task (Russakovsky et al., 2015). Other very deep networks include the highway network with depths up to 100 layers (Srivastava et al., 2015). Eldan & Shamir (2016) provide a theoretical justification that reveals the utility of having deeper networks rather than wider networks, implying that future progress will lead to the development of even deeper networks.

Understanding the deep representations of neural networks has become increasingly difficult as the state-of-the-art models have more layers. This problem is important because it will help us understand the intrinsic mechanism of deep neural networks and explore possible novel applications based on the understanding.
Ballester & de Araújo (2016) show how CNNs, trained to identify objects primarily in photos, could be used for abstract sketch recognition.

John E. Hopcroft, Yu Sun
Computer Science Department
Cornell University
{jeh, ys646}@cs.cornell.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Gatys et al. (2015a;b) utilize the correlations between feature maps to synthesize natural textures and transfer artistic style with high perceptual quality. In Bioinformatics, deep neural networks are used for the analysis of medical images for cancer detection (Cireşan et al., 2013) as well as for drug discovery and toxicology (Dahl et al., 2015; Wallach et al., 2015). A deep-learning approach based on the autoencoder architecture has been adopted to predict Gene Ontology annotations and gene-function relationships (Chicco et al., 2014).

The Tree of Life refers to the compilation of a comprehensive phylogenetic (or evolutionary) database rooted at the last universal common ancestor of life on Earth. Over the course of hundreds of millions of years, the splitting and subsequent divergence of lineages has produced the tree of life, which has as its leaves the many species of organisms. Here we refer to a phylogenetic tree, evolutionary tree or tree of life as a branching diagram showing the inferred genealogical relationships (i.e. how close two species are in the evolutionary history, as evaluated by observed heritable traits such as DNA sequences) among various biological species (Hug et al., 2016). This is an important problem in evolutionary biology and many attempts have been made (Darwin, 1859; Doolittle & Bapteste, 2007; Bapteste et al., 2009; Edwards, 2009). Originally the tree of life was built manually based on the understanding of the evolution history or the visual similarity of the species. Today modern techniques have been applied based on the gene similarity."}, {"section_index": "3", "section_name": "Our contributions are two-fold", "section_text": "1) Provides a potential solution to the important problem of constructing a biology evolutionary tree.

We propose a novel approach to constructing a tree of life using the deep representation of CNNs trained for image classification. We conjecture that the hierarchical feature representation learned by deep networks can be leveraged to quantify the visual similarity of the species. In this way, we might be able to construct a tree of life using their feature similarity.

2) Gives insight into the representations produced by deep neural networks.

For species not in the training categories that are visually similar to some species in the training dataset, could we still utilize their deep representations in order to judge the relationship among different species? We conjecture that they show high cosine similarity of the activation vectors in high-level layers. By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we present empirical evidence that through transfer learning we could roughly construct their tree of life.

We have two important criteria in mind while constructing our image dataset. 1) We would like each image category, which corresponds to a node in the tree (i.e. a species), to have enough samples such that a statistic from the network activations is reasonably robust to noise.
2) There exists a ground truth hierarchy on the image categories, so we can objectively evaluate the effectiveness of our method.

We conjecture that if images of two training categories share some similar features, then their cross activations in the softmax layer should be high. Hence we could evaluate the genetic distance of species within the training categories. Based on the deep representations of several typical CNNs, AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014) and ResNet (He et al., 2016), that are trained for ImageNet classification, we construct a tree of life for dozens of species in the thousands of ImageNet categories of the training dataset.

Experiments show that the proposed method using deep representation is very competitive to human beings in building the tree of life based on the visual similarity of the species. We also try networks at different epochs during training, and the quality of the tree of life increases over the course of training. The performance among the three networks, AlexNet, VGG and ResNet, improves with the improvement of their classification quality.

Fortunately, the ImageNet 2012 Classification dataset provides the raw material we need. This dataset contains 1000 categories of common life objects, and each category contains 1000 images as the training data. Also, those categories correspond exactly to nodes in the WordNet hierarchy. WordNet (Miller, 1995) is a large lexical database of English, where words are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept, and synsets are interlinked by means of conceptual-semantic and lexical relations.

To find a small branch of the phylogenetic tree in order to do the reconstruction, we choose a set A of genealogically close species (species close in the evolutionary tree of life, as evaluated by the branch distance) from the 1000 ImageNet categories. And for each category A ∈ A, we use all the 1000 images from the training dataset to get robust results.

For the ground truth, in the smallest WordNet subtree that contains A: 1) we could consider just the categories in A and their positions in this WordNet subtree, and build a smallest ground truth tree T_A; 2) we could additionally consider some categories outside A in this WordNet subtree. Then the ground truth tree T_A' contains some categories outside the ImageNet training categories. Note that the nodes in T_A are basically the intersection of the nodes in T_A' and the 1000 ImageNet categories. For each category outside the 1000 training categories, we also use the 1000 images from the ImageNet database.*

* The only exception is Bassarisk, which only contains 694 images."}, {"section_index": "4", "section_name": "2.2. SIMILARITY EVALUATION", "section_text": "We input all selected images for species in T_A or T_A' to a reference network and execute the feed-forward pass. The feature maps (i.e. the activation vectors) of the last fully connected (FC) layer and the softmax layer are used to build the distance matrix.

For the reference network, we select three popular CNNs (AlexNet, VGG-16 and ResNet-152) trained on ImageNet. The top-5 classification errors of AlexNet, VGG and ResNet are 15.3%, 9.9% and 6.7%, respectively. So they all learn the features of the images very well, and we could leverage their deep representations for the ToL construction.

1) The Probability Method. For T_A, each class is in the training set and their ground truth labels are among the ones represented by the softmax layer. So we utilize the probability distribution of the images at the softmax layer in order to build a distance matrix. Specifically, for two classes of images A and B in the categories of A, we consider their cross activations in the softmax layer. For each image a ∈ A, we obtain the predicted probability P_{a2B} that this image belongs to node B, and we calculate the average of these values, named P_{A2B}:

P_{A2B} = \frac{1}{|A|} \sum_{a \in A} P_{a2B}

For each image b ∈ B, we obtain the predicted probability P_{b2A} that this image belongs to node A, and we calculate the average of these values, named P_{B2A}:

P_{B2A} = \frac{1}{|B|} \sum_{b \in B} P_{b2A}

The closer the genealogical relationship of A and B, the higher the cross predicted probability value should be. As the cross confidence is close to zero, we use the logarithmic function to enlarge the value. Then we add "−" to assign lower value to closer species and to keep the value nonnegative:

D_{AB} = \begin{cases} 0 & \text{if } A = B \\ -\log(0.5\, P_{A2B} + 0.5\, P_{B2A}) & \text{if } A \neq B \end{cases}
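A minimal sketch of the probability method above, with a toy two-class label map and hard-coded softmax outputs standing in for the real network activations:

```python
# Minimal sketch: cross-activation distance between two categories.
import numpy as np

class_index = {"goldfish": 0, "lionfish": 1}           # toy label map
softmax_outputs = {                                     # (n_images, n_classes)
    "goldfish": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "lionfish": np.array([[0.2, 0.8], [0.1, 0.9]]),
}

def p_a2b(A, B):
    # P_{A2B}: average softmax probability that images of A belong to B
    return softmax_outputs[A][:, class_index[B]].mean()

def distance(A, B):
    # D_{AB} = 0 if A == B, else -log(0.5 P_{A2B} + 0.5 P_{B2A})
    if A == B:
        return 0.0
    return -np.log(0.5 * p_a2b(A, B) + 0.5 * p_a2b(B, A))

print(distance("goldfish", "lionfish"))
```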
2) The Inner Product Method. For T'_A, as some species are not in the 1000 classification categories, we use the centroid vector of the activations at the last fully connected (FC) layer for each species, and calculate the dot product of the two unitized centroid vectors to get their cosine similarity. As above, we take the negative logarithm so that closer species receive a lower, nonnegative value:

$$D_{AB} = -\log\left(\frac{v_A \cdot v_B}{\lVert v_A \rVert\, \lVert v_B \rVert}\right)$$

where v_A and v_B are the centroid activation vectors of species A and B.

Based on the distance matrix, we have three methods, namely "Approximation Central Point", "Minimum Spanning Tree", and "Multidimensional Scaling", to construct a tree of life.

1) The "Approximation Central Point" (ACP) based method. In the ACP based method, we build a tree bottom up by recursively merging the two species points, say A and B, with the smallest distance, and setting the distance of the new point to any other point as the average of the distances of A and B to that point.

2) The "Minimum Spanning Tree" (MST) based method. In the MST based method, we first construct a minimum spanning tree based on the distance matrix. Then we build a tree from the root to the leaves, recursively splitting the current MST subtree into two parts by removing its longest edge, until there is only one node in each subtree. In this way we build a "tree" whose leaves correspond to the species, and the closest species are split last.

3) The "Multidimensional Scaling" (MDS) based method. In the MDS based method, according to D, we know the distances among the points corresponding to the species. We first apply the MDS (Multi-Dimensional Scaling) algorithm (Borg & Groenen, 2005) for dimension reduction, projecting the species points into a two-dimensional subspace. Then we build a tree bottom up by recursively merging the two points with the smallest Euclidean distance in the two-dimensional subspace, regarding the midpoint of the two merged points as the new representative point.

Our following experiments show that MST and MDS have similar performance but ACP is considerably weaker.
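The MST-based method can be sketched in a few lines with SciPy. This is an illustrative implementation of the description above (build the MST, then recursively split at the longest edge); the helper names are ours, not the paper's.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def build_mst_tree(D, names):
    """Return the hierarchy as nested tuples of species names."""
    mst = minimum_spanning_tree(np.asarray(D, dtype=float)).toarray()
    edges = [(int(i), int(j), mst[i, j]) for i, j in zip(*np.nonzero(mst))]
    return _split(set(range(len(names))), edges, names)

def _split(nodes, edges, names):
    if len(nodes) == 1:
        return names[next(iter(nodes))]
    longest = max(edges, key=lambda e: e[2])       # remove the longest edge
    rest = [e for e in edges if e is not longest]
    left = _reachable(longest[0], rest)            # component containing i
    right = nodes - left
    return (_split(left, [e for e in rest if e[0] in left], names),
            _split(right, [e for e in rest if e[0] in right], names))

def _reachable(start, edges):
    comp, frontier = {start}, [start]
    while frontier:                                # BFS over remaining edges
        u = frontier.pop()
        for a, b, _ in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in comp:
                    comp.add(y)
                    frontier.append(y)
    return comp

For six fish species this would return nested tuples such as (('tiger shark', 'great white shark'), ...), which can then be compared against the WordNet subtree.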
We conduct a comprehensive set of experiments to build several branches of the phylogenetic tree at different granularities. To test whether our method can distinguish tiny visual differences, we first choose genealogically very close species, such as a set of fish species or a set of canine species, and construct their tree of life. Then, to test whether our method scales to larger sets of species, such as dog, cat, fish, etc., we choose 39 different species to build a more general tree of life and verify whether different breeds of one large species, like dogs, are grouped together. In addition, to evaluate the ability to construct hierarchical trees based on the visual similarity of images outside biology, we choose some vehicle categories from the ImageNet dataset (Russakovsky et al., 2015) and build a vehicle tree.

For the methods, we use the probability method of Section 2.2 to build the distance matrix, and apply the ACP, MST, and MDS based methods to build the tree of life. For the inner product method of Section 2.2, the results are slightly weaker, but it can deal with species or categories outside the training set. For details of the inner product method, readers are referred to the Appendix."}, {"section_index": "5", "section_name": "3.1 CONSTRUCTING FINE-GRAINED TREE OF LIFE", "section_text": "To construct a fine-grained tree of life, we select several fish species of high visual similarity and test whether we can identify the tiny differences in their features. We pick six fish species from the ImageNet training set and, for each species, input all of its 1000 training images to the ResNet network.

Figure 1 shows that the trees of life constructed by MST and MDS coincide with the hierarchical tree built on WordNet. The hierarchical tree constructed by ACP does not coincide with the ground truth at all. The reason may be that in any triangle ABC, the edge length from A to the midpoint D of BC is shorter than the average length of edges AB and AC. If A is far from symmetric as evaluated by edge BC, the recalculated distance AD does not accurately represent the distance of A to the merged set {B, C}.

Our results demonstrate that deep CNNs capture local features as well as global features simultaneously. To rebuild the tree of life for genealogically close species, we need features of different granularity, such as the animal's size, skin texture and shape.

As another example, we choose 11 very similar canine species and build a relatively larger tree, as illustrated in Figure 3. We can correctly build the canine tree, possibly according to their fur texture and shape features. The reconstruction quality is as good as what human beings could achieve based on visual similarity.

[Figure 1 appears here; panels: ACP method, MST method, MDS method, WordNet, each a tree over the six fish species (including lionfish, goldfish, tiger shark and great white shark).]

Figure 2: Constructed tree of life for families of species by different networks. Species of the five families are in different colors. ResNet and VGG can correctly cluster the species but AlexNet does not. Built by the MST based method.

Figure 2 shows the coarse-grained tree of life for clustering species of different families by different networks: ResNet, VGG and AlexNet. We pick 38 species from five families: bird, canine, plant, fish and feline. ResNet and VGG correctly cluster the species by family, while AlexNet makes some mistakes.
This result indicates that deep networks with higher classification quality learn better deep representations, so the tree of life built on the deep representation also has higher reconstruction quality.

To show that we not only correctly cluster the species, but also recover the correct hierarchy within each family, we further construct a tree containing 20 species of five families, as illustrated in Figure 4.

Figure 1: Trees of life for fish species. The first three trees are constructed by our methods, and the fourth tree is the ground truth using WordNet. The hierarchy of MST and MDS coincides with that of WordNet.

[Figure 2 appears here; panels: ResNet, VGG, AlexNet; legend: bird, canine, plant, fish, feline.]

Figure 3: A constructed tree of life for 11 canine species. Closer species show shorter distance. Built by the MDS based method. [Leaves include Japanese spaniel, Border collie, Shetland sheepdog, collie, Greater Swiss Mountain dog, Great Dane, Rottweiler, Doberman, briard, schipperke and German shepherd.]

Figure 4: A constructed small tree of life for different families of species. We not only correctly cluster each family of species, but also present the correct hierarchy of the species within each family. Built by the MDS based method. [Leaves include brambling, house finch, jacamar, lorikeet, pufferfish, goldfish, sea lion, dugong, sturgeon, Shetland sheepdog, Japanese spaniel, Great Dane, tabby, Persian cat and cougar.]

To show the ability to build hierarchical trees for objects other than animals, we pick eight vehicle categories from the ImageNet training set. Vehicles are very different from animals: their shapes are largely fixed and they can only perform certain motions, like going forward or turning. Images of vehicles do not embed features as abundant as animal images do.

Nevertheless, our method still outputs good results, as shown in Figure 5. We cluster the ambulance, fire truck and garbage truck together, all of which have big carriages, whereas in WordNet the ambulance is close to the Model T, convertible and cab, which have no carriage and are much smaller than an ambulance. Our result is more reasonable than what WordNet provides.

Figure 5: A constructed vehicle tree. Our result looks more reasonable than that of WordNet. Built by the MDS method. [Leaves include mountain bike, tandem bicycle, fire truck, ambulance and convertible.]"}, {"section_index": "6", "section_name": "4 CONCLUSION", "section_text": "By leveraging the similarity of features extracted automatically by deep learning techniques, we build a tree of life for various biological species, either belonging to the training categories or not. The results are highly competitive with the level of human beings in building the tree of life based on the visual similarity of the images. Our work provides new understanding of the deep representations of neural networks and sheds light on possible novel applications of deep learning in the area of Bioinformatics. An intriguing direction for future work is how to utilize deep learning techniques to build a more delicate tree of life based on the gene similarity of the species."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research work was supported by the US Army Research Office (W911NF-14-1-0477) and the National Science Foundation of China (61472147).

Pedro Ballester and Ricardo Matsumura de Araújo.
On the performance of GoogLeNet and AlexNet applied to sketches. In AAAI, pp. 1124-1128, 2016.

Eric Bapteste, Maureen A. O'Malley, Robert G. Beiko, Marc Ereshefsky, J. Peter Gogarten, Laura Franklin-Hall, François-Joseph Lapointe, John Dupré, Tal Dagan, Yan Boucher, et al. Prokaryotic evolution and the tree of life are two different things. Biology Direct, 4(1):1, 2009.

Ingwer Borg and Patrick J. F. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer Science & Business Media, 2005.

Davide Chicco, Peter J. Sadowski, and Pierre Baldi. Deep autoencoder neural networks for gene ontology annotation predictions. In 5th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, BCB, pp. 533-540, 2014.

Dan C. Cireşan, Alessandro Giusti, Luca M. Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast cancer histology images using deep neural networks. In MICCAI, pp. 411-418, 2013.

Dan C. Cireşan, Ueli Meier, Jonathan Masci, and Jürgen Schmidhuber. Multi-column deep neural network for traffic sign classification. Neural Networks, 32:333-338, 2012.

Charles Darwin. On the origin of species by means of natural selection. Nature, pp. 502, 1859.

Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907-940, 2016.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In NIPS, pp. 262-270, 2015b.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

Laura A. Hug, Brett J. Baker, Karthik Anantharaman, Christopher T. Brown, Alexander J. Probst, et al. A new view of the tree of life. Nature Microbiology, pp. 16048, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Li Deng and Dong Yu. Deep learning: Methods and applications. Technical report, May 2014.

George A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41, 1995.

Bharath Ramsundar, Steven M. Kearnes, Patrick Riley, Dale Webster, David E. Konerding, and Vijay S. Pande. Massively multitask networks for drug discovery. CoRR, abs/1502.02072, 2015.

Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk. Google voice search: faster and more accurate. September 2015. URL https://research.googleblog.com/2015/09/google-voice-search-faster-and-more.html.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. Parsing with compositional vector grammars. In ACL, pp. 455-465, 2013."}, {"section_index": "8", "section_name": "APPENDIX", "section_text": "To test the inner product method of Section 2.2, which can build trees for species not in the training set, we select 5 species not in the training set and 14 species in the training set. We choose 1000 images for each species, except for Bassarisk which only contains 694 images. We show the results on ResNet using the MDS based method. Figure 6 illustrates the result.
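As a companion to this appendix experiment, here is a minimal sketch of the inner product method, assuming the last-FC-layer activations are available as per-species arrays (fc_feats is an illustrative name). The −log of the cosine similarity follows the distance used in Section 2.2; the clip guards against non-positive similarities, which should not occur for post-ReLU features.

import numpy as np

def inner_product_distance(fc_feats):
    """fc_feats[s]: (n_images, d) last-FC activations for species s."""
    names = list(fc_feats.keys())
    # Unitized centroid of the activation vectors for each species.
    cents = [fc_feats[s].mean(axis=0) for s in names]
    cents = [c / np.linalg.norm(c) for c in cents]
    n = len(names)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                sim = float(cents[i] @ cents[j])   # cosine similarity
                D[i, j] = -np.log(max(sim, 1e-12))  # lower = closer species
    return D, names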
Figure 6: Constructing a tree of life containing some species not in the training set (marked by pink points). We use the inner product method to build the distance matrix. Only coati is in the wrong leaf of the tree. [Panels: MDS method vs. WordNet; leaves include Siamese cat, tiger cat, Egyptian cat, Persian cat, African elephant, Indian elephant, Alaskan brown bear, grizzly, lesser panda, giant panda, kit fox, leopard and snow leopard.]"}]
HJGODLqgx
[{"section_index": "0", "section_name": "RECURRENT HIDDEN SEMI-MARKOV MODEL", "section_text": "Hanjun Dai!, Bo Dai!, Yan-Ming Zhang\u2019, Shuang Li!, Le Song!\nSegmentation and labeling of high dimensional time series data has wide appli\ncations in behavior understanding and medical diagnosis. Due to the difficulty\nof obtaining a large amount the label information, realizing this objective in an\nunsupervised way is highly desirable. Hidden Semi-Markov Model (HSMM) is a\nclassical tool for this problem. However, existing HSMM and its variants typically\nmake strong generative assumptions on the observations within each segment, thus\ntheir ability to capture the nonlinear and complex dynamics within each segment is\nlimited. To address this limitation, we propose to incorporate the Recurrent Neural!\nNetwork (RNN) as the generative process of each segment, resulting the Recurrent\nHSMM (R-HSMM). To accelerate the inference while preserving accuracy, we\ndesigned a structure encoding function to mimic the exact inference. By gener\nalizing the penalty method to distribution space, we are able to train the model\nand the encoding function simultaneously. We also demonstrate that the R-HSMM\nsignificantly outperforms the previous state-of-the-art on both the synthetic and\nreal-world datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Segmentation and labeling of time series data is an important problem in machine learning anc\nsignal processing. Given a sequence of observations {x1,72,...,a7}, we want to divide the 7\nobservations into several segments and label each segment simultaneously, where each segmen\nconsists of consecutive observations. The supervised sequence segmentation or labeling technique:\nhave been well studied in recent decades (Sutskever et al.| 2014} Kong et al.| 2015} Chen et al.\n(2015). However, for complicated signals, like human activity sensor data, accurately annotating the\nsegmentation boundary or the activity type would be prohibitive. Therefore, it is urgent to develo\nunsupervised algorithms that can jointly learn segmentation and labeling information directly from\nthe data without supervisions. Figure[I|provides an illustration which we are focus on.\nThe Hidden Semi-Markov Model (HSMM) is a powerful model for such task. ]\neliminates the implicit geometric duration distribution assumptions in HMM (Yu} [2010), thus allow\nthe state to transit in a non-Markovian way. Most of the HSMM variants make strong parametri\nassumptions on the observation model (Rabiner|/1989}|Johnson & Willsky||2013 [Yu] 2010). Thi\nmakes the learning and inference simple, but ignores the nonlinear and long-range dependency withi\na segment. Take the human activity signals as an example. The movements a person performs at\ncertain time step would rely heavily on the previous movements, like the interleaving actions of le!\nhand and right hand in swimming, or more complicated dependency like shooting after jumping i\nplaying basketball. Some models have been proposed to tackle this problem Ghahramani & Hintot\n70001 [Fox et al.|/2009| [Linderman et al.|{2016). but are limited in linear case.\nSince people have justified RNN\u2019s ability in modeling nonlinear and complicated dependen\ncies (Sutskever et al. 2014} Du et al.| 2016), we introduce the recurrent neural emission model into\nHSMM for capturing various dependencies within each segment to address such issue. 
However, the flexibility of the recurrent neural model comes at a price: it makes the exact Expectation-Maximization (EM) algorithm computationally too expensive.

To speed up learning and inference, we exploit the variational autoencoder (VAE) framework (Kingma & Welling, 2013). Specifically, we propose to use a bidirectional RNN (bi-RNN) encoder."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure 1 appears here; panels: (a) Sine, (b) Gaussian Process.]

Figure 1: Synthetic experiment results. Different background colors represent the segmentations with different labels. In the top row, the black curve shows the raw signal. (a) The Sine data set is generated by an HSMM with 3 hidden states, where each state has a corresponding sine function; (b) similar to 1a, but the segments are generated from Gaussian processes with different kernel functions. The first two rows are our algorithms, which locate almost every segment exactly.

Such an architecture will mimic the forward-backward algorithm, and hence is expected to capture similar information as the exact posterior calculation.

It should be emphasized that due to the discrete nature of the latent variables in our model, the algorithm proposed in Kingma & Welling (2013) and its extensions to time-series models (Gao et al., 2016; Krishnan et al., 2015) are not directly applicable. There is plenty of work based on stochastic neurons (Tang & Salakhutdinov, 2013; Bengio et al., 2013; Mnih & Gregor, 2014; Raiko et al., 2014; Mnih & Rezende, 2016) that could remedy this issue. However, none of these off-the-shelf methods achieve good performance in our experiments: the hundreds or thousands of layers of stochastic neurons (equal to the length of the sequence), together with the switching generative RNN, make the encoding function very sensitive, and thus extremely difficult to train in the fully unsupervised setting. We propose a solution, the stochastic distributional penalty method, which introduces auxiliary distributions to separate the decoding R-HSMM and the encoding bi-RNN in the training procedure, and thus reduces the learning difficulty of each component. This novel algorithm is general enough to be applied to other VAEs with discrete latent variables, and can be of independent interest. We emphasize that the proposed algorithm maximizes exactly the negative Helmholtz variational free energy. It is different from Johnson et al. (2016), in which a lower bound of the variational free energy is proposed as a convenient surrogate to be maximized.

We experimentally justify our algorithm on synthetic datasets and three real-world datasets, namely segmentation tasks for human activity, fruit fly behavior and heart sound records. The R-HSMM with exact Viterbi inference significantly outperforms the basic HSMM and its variants, demonstrating that the generative model is indeed flexible.
Moreover, the trained bi-RNN encoder achieves similar state-of-the-art performance to the exact inference, but with 400 times faster inference speed, showing that the proposed structured encoding function is able to mimic the exact inference efficiently."}, {"section_index": "3", "section_name": "2 MODEL ARCHITECTURE", "section_text": "Given a sequence x = [x_1, x_2, ..., x_{|x|}], where x_t ∈ R^m is an m-dimensional observation at time t, our goal is to divide the sequence into meaningful segments. Thus, each observation x_t will have a corresponding label z_t ∈ Z, where Z = {1, 2, ..., K} is a finite discrete label set and K is predefined. The label sequence z = [z_1, z_2, ..., z_{|x|}] has the same length as x.

Besides labels, the HSMM associates each position t with an additional duration variable d_t ∈ D = {1, 2, ..., D}, where D is the maximum possible duration. The duration variable controls the number of steps the current hidden state will remain. We use d to denote the duration sequence. We also use the notation x_{t1:t2} to denote the substring [x_{t1}, x_{t1+1}, ..., x_{t2}] of x. Without ambiguity, we use z as a segment label and d as a duration.

In this paper, we focus on one of the variants of the HSMM, namely the explicit duration HMM (EDHMM) (Rabiner, 1989), and use decreasing count variables (Chiappa, 2014) for the notation.

Explicit Duration Hidden Markov Model. Similar to the HMM, this model treats the pair (z, d) as a "macro hidden state". The probability of the initial macro state is defined as P(z, d) = P(z)P(d|z). We use the notation π_z ≜ P(z) and B_{z,d} ≜ P(d|z) to parametrize the initial probability and the duration probability, respectively. A_{j,i} ≜ P(z_t = i | z_{t−1} = j, d_{t−1} = 1) is the state transition probability on a segment boundary. Here π ∈ R^K lies in the K-dimensional simplex. For each hidden state z, the corresponding rows B_{z,·} and A_{z,·} are also in the probability simplex. We assume a multinomial distribution for P(d|z).

In the EDHMM, the transition probability of the macro hidden state, P(z_t, d_t | z_{t−1}, d_{t−1}), is decomposed into P(z_t | z_{t−1}, d_{t−1}) P(d_t | z_t, d_{t−1}) and is defined as:

$$P(z_t|z_{t-1},d_{t-1}) = \begin{cases} A_{z_{t-1},z_t} & \text{if } d_{t-1}=1 \\ \mathbb{I}(z_t=z_{t-1}) & \text{if } d_{t-1}>1 \end{cases}, \qquad P(d_t|z_t,d_{t-1}) = \begin{cases} B_{z_t,d_t} & \text{if } d_{t-1}=1 \\ \mathbb{I}(d_t=d_{t-1}-1) & \text{if } d_{t-1}>1 \end{cases} \quad (1)$$

Recurrent Hidden Semi-Markov Model. For simplicity of explanation, we focus our algorithm on a single sequence first; it is straightforward to apply the algorithm to a dataset with multiple sequences.
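As a concrete illustration of Eq. (1), the following sketch samples a macro-state sequence (z, d) from the EDHMM prior with decreasing count variables. It is a minimal NumPy implementation for exposition, not the training code.

import numpy as np

def sample_macro_states(pi, A, B, T, seed=0):
    """pi: (K,) initial probs; A: (K, K) with A[j, i] = P(z_t=i | z_{t-1}=j,
    d_{t-1}=1); B: (K, D) duration probs. Returns z, d of length T."""
    rng = np.random.default_rng(seed)
    K, D = B.shape
    z = np.zeros(T, dtype=int)
    d = np.zeros(T, dtype=int)
    z[0] = rng.choice(K, p=pi)
    d[0] = rng.choice(D, p=B[z[0]]) + 1        # durations live in {1, .., D}
    for t in range(1, T):
        if d[t - 1] == 1:                      # segment boundary: resample
            z[t] = rng.choice(K, p=A[z[t - 1]])
            d[t] = rng.choice(D, p=B[z[t]]) + 1
        else:                                  # deterministic continuation
            z[t] = z[t - 1]
            d[t] = d[t - 1] - 1
    return z, d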
Given the parameters {π, A, B}, the log-likelihood of a single observation sequence x can be written as

$$\mathcal{L}(\theta) = \log \sum_{z,d} \pi_{z_1} B_{z_1,d_1} \prod_{t=2}^{|x|} P(z_t|z_{t-1},d_{t-1})\, P(d_t|z_t,d_{t-1})\, P(x|z,d) \quad (2)$$

In this model, P(x|z,d) = ∏_{i=1}^{|s|} P(x_{s_i:s_i+d_{s_i}−1}|z_{s_i}, d_{s_i}) is the product of the generative probabilities of the individual segments, where s_i denotes the start position of the i-th segment:

$$P(x_{s_i:s_i+d_{s_i}-1}|z_{s_i}, d_{s_i}) = \prod_{t=s_i}^{s_i+d_{s_i}-1} P(x_t|x_{s_i:t-1}, z_{s_i}) = \prod_{t=s_i}^{s_i+d_{s_i}-1} P(x_t|h_t, z_{s_i}) \quad (3)$$

Each segment is generated by an RNN whose hidden state evolves as

$$h_t = \sigma\big(W^{z_{s_t}} x_{t-1} + V^{z_{s_t}} h_{t-1} + b^{z_{s_t}}\big) \quad (4)$$

In Eq. (4), W ∈ R^{h×m} is a weight matrix capturing the last observation x_{t−1}, and V ∈ R^{h×h} propagates the history h_{t−1}; b is a bias term. The superscript z_{s_t} indexes the RNN used for the corresponding segment. Segments with different labels are generated by different RNNs, so we maintain K RNNs. σ(·) is a nonlinear activation function; we use tanh in our experiments.

At time step t, we assume a diagonal multivariate Gaussian distribution for the conditional likelihood, where the mean and covariance matrix are outputs of the RNN, i.e.,

$$P(x_t|h_t, z_{s_t}) \sim \mathcal{N}\Big(\mu = \Theta_\mu^{z_{s_t}} h_t + b_\mu^{z_{s_t}},\ \ \Sigma = \mathrm{Diag}\big(\exp(\Theta_\Sigma^{z_{s_t}} h_t + b_\Sigma^{z_{s_t}})\big)\Big) \quad (5)$$

The above formulation indicates that the generative model P(x_t|h_t, z_{s_t}) depends not only on the last observation x_{t−1}, but also on the last hidden state h_{t−1}, both captured in Eq. (4). In summary, we denote all the parameters of the proposed R-HSMM as θ = {π, A, B, θ_rnn}. The corresponding graphical model is shown in Figure 2b.

To obtain the posterior or the MAP in the proposed R-HSMM, the classical forward-backward algorithm or Viterbi algorithm needs to solve one dynamic program per sample, which makes inference costly, especially for long sequences with thousands of timestamps. Instead, we treat Bayesian inference from an optimization perspective and obtain the posterior by maximizing the negative Helmholtz variational free energy (Williams, 1980; Zellner, 1988; Dai et al., 2016),

$$\max_{Q \in \mathcal{P}}\ \mathcal{L}_Q(x) := \mathbb{E}_{Q(z,d|x)}\big[\log P_\theta(x,z,d) - \log Q(z,d|x)\big] \quad (6)$$

over the space P of all valid densities. To make the optimization (6) tractable, the variational autoencoder restricts the feasible set to some parametrized family Q_ψ, which can be executed efficiently compared to the forward-backward or Viterbi algorithm. However, such a restriction introduces extra approximation error. To reduce this error, we use a structured model, i.e., a bidirectional RNN, to mimic the dynamic programming of the forward-backward algorithm. Specifically, in the forward-backward algorithm, the forward message α_t(z_t, d_t) and the backward message β_t(z_t, d_t) are computed recursively, and the marginal posterior at position t depends on both α_t(z_t, d_t) and β_t(z_t, d_t). Similarly, in the bi-RNN we embed the posterior messages in the RNN latent vectors, and the marginal posterior is obtained from the latent vectors of the two RNNs at the same time step t. Let ψ = {ψ_rnn1, ψ_rnn2, W_z ∈ R^{K×2h}, W_d ∈ R^{D×2h}} be the parameters of the bi-RNN encoder; Q_ψ is decomposed as:

$$Q_\psi(z,d|x) = Q(z_1|h_1;\psi)\, Q(d_1|z_1,h_1;\psi) \prod_{t=2}^{|x|} Q(z_t|z_{t-1},d_{t-1},h_t;\psi)\, Q(d_t|z_t,d_{t-1},h_t;\psi) \quad (7)$$

where h_t = [RNN_1(x_{1:t}), RNN_2(x_{t:|x|})] is computed by the bi-RNN. We use multinomial distributions Q(z_t|h_t;ψ) = M(softmax(W_z h_t)) and Q(d_t|z_t,h_t;ψ) = M(softmax(W_d h_t)). The dependency on d_{t−1} ensures that the generated segmentation (z, d) is valid according to Eq. (1): for example, if we sampled a duration d_{t−1} > 1 from Q_ψ at time t−1, then d_t and z_t are deterministic. In our experiments, we use LSTMs (Hochreiter & Schmidhuber, 1997) as the recurrent units in the bi-RNN.
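The validity constraint encoded by the dependency on d_{t−1} can be made concrete with a small sketch: given per-step scores W_z h_t and W_d h_t from the bi-RNN (passed here as illustrative arrays logits_z and logits_d), sampling from the factorized Q_ψ of Eq. (7) is stochastic only at segment boundaries.

import numpy as np

def sample_from_encoder(logits_z, logits_d, seed=0):
    """logits_z: (T, K), logits_d: (T, D). Returns a valid (z, d)."""
    rng = np.random.default_rng(seed)
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()
    T = len(logits_z)
    z = np.zeros(T, dtype=int)
    d = np.zeros(T, dtype=int)
    z[0] = rng.choice(len(logits_z[0]), p=softmax(logits_z[0]))
    d[0] = rng.choice(len(logits_d[0]), p=softmax(logits_d[0])) + 1
    for t in range(1, T):
        if d[t - 1] == 1:
            # New segment: both label and duration are stochastic.
            z[t] = rng.choice(len(logits_z[t]), p=softmax(logits_z[t]))
            d[t] = rng.choice(len(logits_d[t]), p=softmax(logits_d[t])) + 1
        else:
            # The dependency on d_{t-1} makes this step deterministic,
            # matching Eq. (1).
            z[t], d[t] = z[t - 1], d[t - 1] - 1
    return z, d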
Since for any fixed θ the negative Helmholtz variational free energy is a lower bound of the marginal likelihood, i.e., L_{Q_ψ}(x) ≤ log P_θ(x), we learn θ and ψ jointly by solving

$$\max_{\theta,\psi}\ \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}_{Q_\psi}(x^{(n)}) \quad (9)$$

Algorithm 1 Learning sequential VAE with stochastic distributional penalty method
1: Input: sequences {x^{(n)}}_{n=1}^{N}
2: Randomly initialize ψ^0 and θ^0 = {π^0, A^0, B^0, θ_rnn^0}
3: for λ = 0, ..., ∞ do
4:   for t = 0 to T do
5:     Sample {x^{(n)}}_{n=1}^{M} uniformly from the dataset with mini-batch size M.
6:     Get {z^{(n)}, d^{(n)}}_{n=1}^{M} with θ^t by dynamic programming in (13).
7:     Update π^{t+1}, A^{t+1}, B^{t+1} using rule (16).
8:     θ_rnn^{t+1} = θ_rnn^t − η_t (1/M) Σ_{n=1}^{M} ∇_{θ_rnn} L_λ(θ, ψ | x^{(n)})
9:     ψ^{t+1} = ψ^t − η_t (1/M) Σ_{n=1}^{M} ∇_ψ L_λ(θ, ψ | x^{(n)})   ▷ bi-RNN sequence-to-sequence learning
10:   end for
11: end for

As we discussed, learning the sequential VAE with stochastic neuron reparametrization in the unsupervised setting is extremely difficult, and none of the off-the-shelf techniques provide satisfactory results. In this section, we introduce an auxiliary distribution into (9) and generalize the penalty method (Bertsekas, 1999) to distribution space.

Specifically, we first introduce an auxiliary distribution Q(z, d|x) for each x and reformulate the optimization (9) as

$$\max_{\theta,\psi,\{Q(z,d|x^{(n)})\}_{n=1}^N}\ \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{Q(z,d|x^{(n)})}\big[\log P_\theta(x^{(n)},z,d) - \log Q(z,d|x^{(n)})\big] \quad \text{s.t.}\ \ KL\big(Q(z,d|x^{(n)})\,\|\,Q_\psi(z,d|x^{(n)})\big) = 0,\ \ n = 1,\dots,N \quad (10)$$

We enforce the introduced Q(z,d|x) to equal Q_ψ(z,d|x) in terms of KL-divergence, so that the optimization problems (9) and (10) are equivalent. Because of the non-negativity of the KL-divergence, it can itself be viewed as a penalty function, and we arrive at the alternative formulation of (10):

$$\max_{\theta,\psi}\ \frac{1}{N}\sum_{n=1}^{N}\ \max_{Q(z,d|x^{(n)})} \mathcal{L}_\lambda(\theta,\psi|x^{(n)}) \quad (11)$$

where

$$\mathcal{L}_\lambda(\theta,\psi|x) = \mathbb{E}_{Q(z,d|x)}\big[\log P_\theta(x,z,d) - \log Q(z,d|x)\big] - \lambda\, KL\big(Q(z,d|x)\,\|\,Q_\psi(z,d|x)\big) \quad (12)$$

Setting the functional derivative with respect to Q to zero, ∇_Q L_λ = log P_θ(x,z,d) + λ log Q_ψ(z,d|x) − (1+λ) log Q(z,d|x) + const = 0, yields the optimal auxiliary distribution

$$Q^*(z,d|x) \propto \big(P_\theta(x,z,d)\, Q_\psi(z,d|x)^{\lambda}\big)^{\frac{1}{1+\lambda}}$$
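A small helper makes the role of Q* concrete: up to normalization, its log-probability is a convex combination of the decoder and encoder log-scores. The weights 1/(1+λ) and λ/(1+λ) are exactly those that reappear in the dynamic program of Eq. (13) below; as λ → ∞ the score is dominated by log Q_ψ, matching the remark that follows. This is a sketch of the scoring rule only, not of the full sampler.

def penalized_log_score(log_p_joint, log_q_psi, lam):
    """Unnormalized log-score of Q*(z, d | x) for one configuration:
    Q* ∝ (P_theta(x, z, d) * Q_psi(z, d | x)**lam) ** (1 / (1 + lam))."""
    return (log_p_joint + lam * log_q_psi) / (1.0 + lam)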
In fact, because we use stochastic gradients to update θ and ψ later, Q*(z,d|x) is never explicitly computed and only samples from it are required. Recalling that Q_ψ(z,d|x) has the nice decomposition (7), we can multiply its factors into each recursion step and still obtain the same complexity as the original Viterbi algorithm for MAP or sampling. Specifically, define α_t(j, r) to be the best joint log probability under Q* of the prefix x_{1:t} whose last segment has label j and occupies positions t−r+1, ..., t, i.e., α_t(j,r) = max log Q*(z_{1:t}, d_{1:t}|x_{1:t}) s.t. z_t = j and d_{t−r+1} = r. It can be computed recursively:

$$\alpha_t(j,r)=\begin{cases}
\alpha_{t-1}(j,r-1)+\frac{1}{1+\lambda}\log\Big(\frac{B_{j,r}}{B_{j,r-1}}P(x_t|x_{t-r+1:t-1},z=j)\Big)+\frac{\lambda}{1+\lambda}\log\frac{Q_\psi(d_{t-r+1}=r|z=j,x)}{Q_\psi(d_{t-r+1}=r-1|z=j,x)} & r>1,\ t>1\\
\max_{i\in\mathcal{Z}\setminus j}\max_{r'\in\mathcal{D}}\alpha_{t-1}(i,r')+\frac{1}{1+\lambda}\log\big(A_{i,j}B_{j,1}P(x_t|z=j)\big)+\frac{\lambda}{1+\lambda}\log Q_\psi(z_t=j,d_t=1|x) & r=1,\ t>1\\
\frac{1}{1+\lambda}\log\big(\pi_j B_{j,1}P(x_1|z=j)\big)+\frac{\lambda}{1+\lambda}\log Q_\psi(z_1=j,d_1=1|x) & r=1,\ t=1\\
0 & \text{otherwise}
\end{cases} \quad (13)$$

Without considering the complexity of computing the emission probabilities, the dynamic programming needs O(|x|K² + |x|KD) time (Yu & Kobayashi, 2003) and O(|x|K) memory. We explain the details of optimizing the time and memory requirements in Appendix A.

Remark: When λ = ∞, Q(z,d|x) is exactly Q_ψ(z,d|x), and the algorithm reduces to working directly on Q_ψ(z,d|x) without the effect of P_θ(x,z,d). It is then equivalent to obtaining the MAP or samples of the latent variables z, d from Q_ψ(z,d|x), whose cost is O(|x|K). In practice, to further accelerate the computation, we can follow this strategy once λ is already large enough, so that the effect of P_θ(x,z,d) is negligible.

With Q(z,d|x) fixed, we update θ and ψ by stochastic gradient descent to avoid scanning the whole training set. Sampling a mini-batch of sequences {x^{(n)}}_{n=1}^{M} with size M ≪ N, we update {θ, ψ} by optimizing the Monte Carlo approximation of (11):

$$\max_{\theta,\psi}\ \frac{1}{M}\sum_{n=1}^{M} \log P_\theta(x^{(n)}, z^{(n)}, d^{(n)}) + \lambda \log Q_\psi(z^{(n)}, d^{(n)}|x^{(n)}) \quad (14)$$

Update θ: Finding the parameters that maximize the likelihood requires solving the constrained optimization

$$\max_{\theta}\ \frac{1}{M}\sum_{n=1}^{M}\Big( \log \pi_{z_1^{(n)}} + \sum_{i=2}^{|s^{(n)}|} \log A_{z_{s_{i-1}}^{(n)}, z_{s_i}^{(n)}} + \sum_{i=1}^{|s^{(n)}|} \log B_{z_{s_i}^{(n)}, d_{s_i}^{(n)}} + \sum_{i=1}^{|s^{(n)}|} \sum_{t=s_i}^{s_i+d_{s_i}-1} \log P(x_t^{(n)}|h_t, z_{s_i}^{(n)}) \Big) \quad (15)$$

where {π, A, B} are constrained to be valid probability distributions. We use stochastic gradient descent to update θ_rnn in the K RNNs. For the parameters π, A, B, which are restricted to the simplex, a stochastic gradient update would involve an extra projection step. To avoid this operation, which may be costly, we use the closed-form update rule derived from the Lagrangian:

$$\pi_i = \frac{\sum_{n=1}^{M} \mathbb{I}(z_1^{(n)}=i)}{M}, \quad A_{i,j} = \frac{\sum_{n=1}^{M}\sum_{i'} \mathbb{I}\big(z_{s_{i'}}^{(n)}=i \text{ and } z_{s_{i'+1}}^{(n)}=j\big)}{\sum_{n=1}^{M}\sum_{i'} \mathbb{I}\big(z_{s_{i'}}^{(n)}=i\big)}, \quad B_{j,r} = \frac{\sum_{n=1}^{M}\sum_{i'} \mathbb{I}\big(z_{s_{i'}}^{(n)}=j \text{ and } d_{s_{i'}}^{(n)}=r\big)}{\sum_{n=1}^{M}\sum_{i'} \mathbb{I}\big(z_{s_{i'}}^{(n)}=j\big)} \quad (16)$$

Since we already have the segmentation solution, the total number of samples used for training equals the number of observations in the dataset. The different RNNs use different parameters and train on different parts of the observations, which makes parallelized training easy.

Update ψ: For fixed λ, log Q_ψ(z^{(n)}, d^{(n)}|x^{(n)}) is essentially a sequence-to-sequence likelihood, where the input sequence is x and the output sequence is {z, d}. Using the form of Q_ψ in Eq. (7), this likelihood decomposes over positions, so we can conveniently train the bi-RNN to maximize the conditional likelihood of the latent variables by stochastic gradient descent.

Remark: We can draw multiple samples {z, d} for each x from Q(z,d|x) to reduce the variance of the stochastic gradient. In our algorithm, the samples of the latent variables come naturally from the auxiliary distributions (which are integrated via the penalty method), rather than from a derivation based on a lower bound of the objective (Tang & Salakhutdinov, 2013; Raiko et al., 2014; Mnih & Rezende, 2016)."}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "Baselines We compare with the classical HSMM and two popular HSMM variants. The first is the Hierarchical Dirichlet-Process HSMM (HDP-HSMM) (Johnson & Willsky, 2013), the nonparametric Bayesian extension of the traditional HSMM that allows an infinite number of hidden states. The second, called subHSMM (Johnson & Willsky, 2014), uses an infinite HMM as the emission model for each segment. This model also has two levels of latent structure; it considers the dependency within each segment, making it a stronger baseline than HDP-HSMM. We also compare with the CRF autoencoder (CRF-AE) (Ammar et al., 2014), which uses a Markovian CRF as the recognition model and a conditionally i.i.d. model for reconstruction. Compared to the HSMM, this model ignores segmentation structure in its modeling and is more similar to an HMM.

Evaluation Metric We evaluate the performance of each method via the labeling accuracy. Specifically, we compare the labels of every single observation in each testing sequence. Since the labels are unknown during training, we use the KM (Kuhn-Munkres) algorithm to find the best mapping between predicted labels and ground-truth labels.

Settings Unless explicitly mentioned otherwise, we use the leave-one-sequence-out protocol to evaluate the methods: each time we test on one held-out sequence and train on the others.
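The label-mapping step of the evaluation metric described above can be sketched with SciPy's implementation of the Kuhn-Munkres algorithm; the function below is an illustrative utility, not the authors' code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mapped_accuracy(pred, truth, K):
    """Labeling accuracy under the best predicted-to-true label mapping."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    # confusion[i, j]: number of positions predicted i with true label j.
    confusion = np.zeros((K, K), dtype=int)
    for p, t in zip(pred, truth):
        confusion[p, t] += 1
    rows, cols = linear_sum_assignment(-confusion)  # maximize agreements
    mapping = dict(zip(rows, cols))
    return float(np.mean([mapping[p] == t for p, t in zip(pred, truth)]))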
We report the mean accuracy in Table 1. We set the truncation of the maximum possible duration D to 400 for all tasks, and we set the number of hidden states K to the ground-truth value.

For the HDP-HSMM and subHSMM, the observation distributions are initialized as standard multivariate Gaussian distributions. The duration is modeled by a Poisson distribution. We tune the concentration parameters α, γ ∈ {0.1, 1, 3, 6, 10}; the hyperparameters are learned automatically. For subHSMM, we tune the truncation threshold of the second-level infinite HMM in {2, ..., 15}.

For CRF-AE, we extend the original model to continuous observations, and learn all parameters similarly to M. Schmidt (2008). We use a mixture of Gaussians to model the emission, where the number of mixtures is tuned in {1, ..., 10}.

For the proposed R-HSMM, we use Adam (Kingma & Ba, 2014) to train the K generative RNNs and the bi-RNN encoder. To make learning tractable for long sequences, we use back-propagation through time (BPTT) with a limited budget. We also tune the dimension of the RNN hidden vector, the L2-regularization weights and the step size. We implemented the method with CUDA, parallelizing across the different RNNs, and conduct experiments on a K20-enabled cluster. We include both the R-HSMM with exact MAP via dynamic programming (rHSMM-dp) and the sequential VAE with forward pass (rHSMM-fw) in the experiments. In all tasks, rHSMM-fw achieves almost the same performance as rHSMM-dp, but 400 times faster, showing that the bi-RNN is able to mimic the forward-backward algorithm very well with efficient computation.

Synthetic Experiments We first evaluate the proposed method on two 1D synthetic sequential datasets. The first dataset is generated by an HSMM with 3 hidden states, where π, A, B are designed beforehand. A segment with hidden state z is a sine function λ_z sin(ω_z x + ε_1) + ε_2, where ε_1 and ε_2 are Gaussian random noises. Different hidden states use different scale parameters λ_z and frequency parameters ω_z. The second dataset also has 3 hidden states, where a segment with hidden state z is sampled from a Gaussian process (GP) with kernel function k_z(x, y); different hidden states employ different kernel functions. The specific kernel functions used here are k_1(x, y) = exp{−min(|x − y|, |x + y|)²/10}, k_2(x, y) = exp{−(x − y)²/10} and k_3(x, y) = (5 − |x − y|) I{(5 − |x − y|) > 0}. For both the Sine and GP datasets, the duration of a segment is randomly sampled from a distribution defined on {1, ..., 100} which depends on the hidden state. Thus, the segmentation task corresponds to finding the different functions embedded in the sequences.
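For concreteness, a generator for Sine-style data might look like the sketch below. The amplitudes, frequencies and noise scales are illustrative placeholders; the paper's exact values are not given.

import numpy as np

def sample_sine_sequence(pi, A, B, T, seed=0):
    """An HSMM samples segments; state z emits
    lambda_z * sin(omega_z * t + eps1) + eps2 within its segment."""
    rng = np.random.default_rng(seed)
    lams = np.array([1.0, 2.0, 3.0])     # per-state amplitude lambda_z
    omegas = np.array([0.2, 0.5, 1.0])   # per-state frequency omega_z
    K, D = B.shape
    x, labels, t = [], [], 0
    z = rng.choice(K, p=pi)
    while t < T:
        dur = rng.choice(D, p=B[z]) + 1  # duration of this segment
        eps1 = rng.normal(scale=0.1)     # phase noise, fixed per segment
        for s in range(min(dur, T - t)):
            x.append(lams[z] * np.sin(omegas[z] * (t + s) + eps1)
                     + rng.normal(scale=0.1))
            labels.append(z)
        t += dur
        z = rng.choice(K, p=A[z])        # next segment label
    return np.array(x), np.array(labels)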
[Figure 3 appears here; panels: (a) Human activity, (b) Drosophila.]

Figure 3: Segmentation results on the Human activity and Drosophila datasets. Different background colors represent segmentations with different labels. In the top row, the black curve shows the signal sequence projected onto the first principal component. The following two rows are our algorithms, which locate almost every segment exactly. (a) The Human activity dataset contains 12 hidden states, each of which corresponds to a human action; (b) the Drosophila dataset contains 11 hidden states, each of which corresponds to a drosophila action.

Table 1: Error rate of segmentation. We report the mean and standard deviation of the error rate.

Methods    | SINE           | GP             | HAPT           | Drosophila     | Heart          | PN-Full
rHSMM-dp   | 2.67 ± 1.13%   | 12.46 ± 2.79%  | 16.38 ± 5.03%  | 36.21 ± 1.37%  | 33.14 ± 7.87%  | 31.95 ± 4.32%
rHSMM-fw   | 4.02 ± 1.37%   | 13.13 ± 2.89%  | 17.74 ± 7.64%  | 35.79 ± 0.51%  | 33.36 ± 8.10%  | 32.34 ± 3.97%
HSMM       | 41.85 ± 2.38%  | 41.15 ± 1.99%  | 41.59 ± 8.58%  | 47.37 ± 0.27%  | 50.62 ± 4.20%  | 45.04 ± 1.87%
subHSMM    | 18.14 ± 2.63%  | 24.81 ± 4.63%  | 22.18 ± 4.45%  | 39.70 ± 2.21%  | 46.67 ± 4.22%  | 43.01 ± 2.35%
HDP-HSMM   | 42.74 ± 2.73%  | 41.90 ± 1.58%  | 35.46 ± 6.19%  | 43.59 ± 1.58%  | 47.56 ± 4.31%  | 42.58 ± 1.54%
CRF-AE     | 44.87 ± 1.63%  | 51.43 ± 2.14%  | 49.26 ± 10.63% | 57.62 ± 0.22%  | 53.16 ± 4.78%  | 45.73 ± 0.66%

We visualize the segmentation results of the ground truth and the competitors on the Sine and GP datasets in Figure 1a and Figure 1b, respectively, and report the numerical results in Table 1. As we can see, R-HSMM provides much better results, even on small segments, dramatically outperforming the HSMM variants and CRF-AE. Also note that the sine function depicts short-term dependencies, while the Gaussian process has long-range dependencies determined by the kernel bandwidth. This demonstrates the ability of R-HSMM to capture both long- and short-term dependencies.

Human activity This dataset, collected by Reyes-Ortiz et al. (2016), consists of signals from a waist-mounted smartphone with accelerometers and gyroscopes. Each volunteer is asked to perform a protocol composed of 12 activities (see Figure 3a for details). Since the signals within an activity type exhibit high correlation, it is natural for an RNN to model this dependency. We use these 61 sequences, each of length around 3000. Each observation is a 6-dimensional vector, consisting of triaxial measurements from the accelerometer and the gyroscope.

Figure 3a shows the ground truth and the segmentation results of all methods. Both rHSMM-dp and rHSMM-fw almost perfectly recover the true segmentation. They also capture the transition activity types, e.g., stand-to-lie or sit-to-lie. The HSMM, HDP-HSMM and CRF-AE make fragmental but periodic segmentations for walking, caused by the lack of dependency modeling within a segment. The subHSMM has a similar problem, possibly due to the limited capacity of its HMM generative model.

Drosophila Here we study the behavior patterns of drosophilas. The data was collected by Kain et al. (2013) with two dyes, two cameras and some optics to track each leg of a spontaneously behaving fruit fly. The dimension of the observation at each timestamp is 45, consisting of the raw features and some higher-order features. See Figure 3b for details of the 11 behavior types. We perform leave-one-sequence-out experiments on 10 sequences of length 10000 each. Figure 3b shows the segmentation results on the prefix of one sequence, while Table 1 gives the mean accuracy over all sequences. Different from the previous experiment, where the human activity signals are relatively smooth, here the signals depict high variance.
Figure 4: Reconstruction illustration. The generative RNNs (decoders) are asked to reconstruct the signals from only the discrete labels and durations (which are generated by the encoder). (a) Reconstruction illustration on the Sine dataset; (b) reconstruction illustration on the Gaussian Process dataset.

Different activities exhibit quite different durations and patterns, and the activity types change frequently. The R-HSMM captures almost every change point of the activities, with both long and short durations. The corresponding mean accuracy also outperforms the baselines. However, we observed some correct segmentations with wrong labels. This happens mostly for short segments, in which the RNN does not have enough history established to distinguish similar activity types.

Physionet Heart sound records, usually represented graphically by a phonocardiogram (PCG), are key resources for pathology classification of patients. We collect data from the PhysioNet Challenge 2016 (Springer et al., 2015), where each observation has been labeled with one of four states, namely Diastole, S1, Systole and S2. We experiment with both the raw signals and the signals after feature extraction. For the raw signals (Heart dataset), we collect 7 one-dimensional sequences of length around 40000. The feature-rich dataset (PN-Full) contains 2750 sequences, each consisting of 1500 four-dimensional observations. We do 5-fold cross validation for PN-Full. Visualizations of the segmentation results are shown in Appendix B. As the results in Table 1 show, our algorithm still outperforms the baselines significantly. For such long raw signal sequences, the speed advantage of the bi-RNN encoder over Viterbi is even more significant: Viterbi takes 8 minutes for one inference, while the bi-RNN takes only several seconds. Our framework is also flexible enough to incorporate prior knowledge, like the regularity of heart state transitions, into the HSMM."}, {"section_index": "5", "section_name": "5.2 RECONSTRUCTION", "section_text": "In this section, we examine the ability of the learned generative model by visualizing the reconstructed signals. Given a sequence x, we use the recognition model to get the latent variables z and d, and then use the learned K generative RNNs to generate the signal within each segment.
For ease of visualization, we show the results on the 1D signal datasets in Fig. 4a and Fig. 4b. From Fig. 4 we can see that the generative RNN correctly captures the different characteristics of the signals with different segment labels, such as the different frequencies and scales in the Sine dataset, or the different variance patterns in the GP dataset. This is essential for distinguishing between different segments.

We presented the R-HSMM, a generalization of the HSMM that incorporates a recurrent neural generative model as the emission probability. To eliminate the inference difficulty caused by such a flexible and powerful model, we introduced the bi-RNN as the encoding distribution via the variational autoencoder framework to mimic the forward-backward algorithm. To deal with the difficulty of training a VAE containing discrete latent variables, we proposed a novel stochastic distributional penalty method. We justified the modeling power of the proposed R-HSMM via segmentation accuracy and reconstruction visualization. In a comprehensive comparison, the proposed model significantly outperforms the existing models. It should be emphasized that the structured bi-RNN encoder yields similar performance to the exact MAP inference while being 400 times faster. Future work includes further speeding up our algorithm, as well as generalizing our learning algorithm to other discrete variational autoencoders.

This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF IIS-1639792 EAGER, ONR N00014-15-1-2340, Nvidia and Intel."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Matthew J. Johnson and Alan S. Willsky. Bayesian nonparametric hidden semi-Markov models. The Journal of Machine Learning Research, 14(1):673-701, 2013.

Jamey Kain, Chris Stokes, Quentin Gaudry, Xiangzhi Song, James Foley, Rachel Wilson, and Benjamin de Bivort. Leg-tracking and automated behavioural classification in Drosophila. Nature Communications, 4:1910, 2013.

Lingpeng Kong, Chris Dyer, and Noah A. Smith. Segmental recurrent neural networks. arXiv preprint arXiv:1511.06018, 2015.

Scott W. Linderman, Andrew C. Miller, Ryan P. Adams, David M. Blei, Liam Paninski, and Matthew J. Johnson. Recurrent switching linear dynamical systems. arXiv preprint arXiv:1610.08466, 2016.

Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.

Kevin P. Murphy. Hidden semi-Markov models (HSMMs). 2002.

Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

Jorge-L. Reyes-Ortiz, Luca Oneto, Albert Samà, Xavier Parra, and Davide Anguita. Transition-aware human activity recognition using smartphones. Neurocomputing, 171:754-767, 2016.

Shun-Zheng Yu. Hidden semi-Markov models. Artificial Intelligence, 174(2):215-243, 2010.

Shun-Zheng Yu and Hisashi Kobayashi. An efficient forward-backward algorithm for an explicit-duration hidden Markov model. IEEE Signal Processing Letters, 10(1):11-14, 2003.

Arnold Zellner. Optimal information processing and Bayes's theorem. The American Statistician, 42(4), November 1988."}, {"section_index": "7", "section_name": "A OPTIMIZING DYNAMIC PROGRAMMING", "section_text": "In this section, we show that Eq. (13) can be computed efficiently in both memory and time: the dynamic programming procedure can be done with O(|x|K) memory, and caching the precomputed emission probabilities requires O(D²K) memory.

Update forward variable α. Note that in Eq. (13), when r > 1, we can update α_t(j, r) deterministically, so it is not necessary to keep records for r > 1. Specifically, we only record α_t(j, 1) and do the updates in a similar way as in Eq. (13); the only difference is that, when constructing the final answer, i.e., the last segment of the solution, we need to loop over all possible z and d to find the best overall segmentation. It is easy to see that the memory consumption is then O(|x|K).

Caching emission probability. At each time step t, we compute P(x_{t+r}|x_{t:t+r−1}, z = j) for each j ∈ Z and r ∈ D. That is to say, we compute all the emission probabilities of observations starting from time t within the maximum possible duration D. This can be done by performing a feed-forward pass of the K RNNs. Storing these results requires O(KD) space. For simplicity, we let
\u20ac RK*P\nef = P(ti4r|@t:t4r\u20141, 2 = J), where e* \u20ac ."}, {"section_index": "8", "section_name": "A.2 SQUEEZE THE TIME COMPLEXITY", "section_text": "In Eq. [13] the most expensive part is when r = 1 and t > 1. If we solve this in a naive way, then this\nstep would require O(|a| 2D) for time complexity, which is quite expensive.\n: 1 .\nai(7,r) =max max ar, (i,7\") + Toa log(Ai,j Bj.1P(1\\2 = J)\n\nr .\n+ THA log Qy (Zt-r41 = 9, di-r41 = 7|@)\n. 1 .\n=max y\u2014a(i) + Py los(Aig Bi P(ael2 = J)\nr .\n+ THA log Qu (4t-r4i = Jj, dir = 7/2)\nThis reduces the complexity to be O(|a| 7).\nIn this section, we show that the Eq. [13}can be computed in a memory efficient way. Specifically, the\ndynamic programming procedure can be done with O(|a|/\u00b0) memory requirement, and caching for\nprecomputed emission probabilities requires O(D?2.K) memory space.\nUpdate forward variable a Note that in Ea.{13| when r > 1, we can update a,(j, 7) deterministi-\ncally. So it is not necessary to keep the records for r > 1.\nSpecifically, let\u2019s only record a;(j, 1), and do the updates in a similar way as in Fa The only\ndifference is that, when constructing the answer, i.e., the last segment solution, we need to do a loop\nover all possible z and d in order to find the best overall segmentation solution.\nNote that, at a certain time step t, we would require the emission probability of observations\nP(a;\\0-,414-1, 2 = j) for some j \u20ac Zand r \u20ac D. In this case, the corresponding first observation\nis 2,_,. That is to say, we should keep e\u2019~?*1,...,e' at time step t. This makes the memory\nconsumption goes to O(K D?)\nHere we adopt similar technique as in/Yu & Kobayashi| (2003). Let 7:(\u00a2) = max,ep ay_1(i, 7\u2019),\n\nthen we can get\n200\n\n400\n\n600\n\n1000\n\n1400\n\n400\n\n800\nFigure 5: More reconstruction illustration on Sine dataset.\n4 4\n2\n0 o\n2 2\n4 4\n8 6\n200 400 600 1000 1400 200 400 00 1200\n4\nBe 8 2\ne? je\n4\n4\nFigure 6: More reconstruction illustration on Gaussian Process dataset.\nThe reconstructed signals from the original signals are shown in Fig. |5jand Fig. [6] for sine datase\nand gaussian Process dataset respectively. We can see the reconstructed signal almost recovered the\noriginal signal. The RNN captured the key differences of states, such as the frequency and scale\n\nwhile in gaussian process dataset, it also recovered the complicated pattern involving long tern\ndependencies.\nWe show the confusion matrix of all methods on synthetic sine and gaussian process dataset i\nFigure[7]and Figure|8]respectively."}, {"section_index": "9", "section_name": "B.2> HUMAN ACTIVITY", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure]9|\nn Figure[10] we also show several other segmentation results on different testing sequences"}, {"section_index": "10", "section_name": "B.3. 
[Figures 5 and 6 appear here.]

Figure 5: More reconstruction illustrations on the Sine dataset.

Figure 6: More reconstruction illustrations on the Gaussian Process dataset.

The reconstructed signals are shown in Fig. 5 and Fig. 6 for the Sine dataset and the Gaussian Process dataset, respectively. We can see that the reconstructed signal almost recovers the original signal. The RNN captures the key differences between states, such as frequency and scale, while on the Gaussian Process dataset it also recovers complicated patterns involving long-term dependencies.

We show the confusion matrices of all methods on the synthetic Sine and Gaussian Process datasets in Figure 7 and Figure 8, respectively.

Figure 7: Confusion matrix on the synthetic Sine dataset. [Panels: (a) rHSMM-dp, (b) rHSMM-fw, (c) subHSMM, (d) HSMM, (e) HDP-HSMM, (f) CRF-AE.]

Figure 8: Confusion matrix on the synthetic Gaussian Process dataset. [Same panels as Figure 7.]"}, {"section_index": "9", "section_name": "B.2 HUMAN ACTIVITY", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 9. In Figure 10, we also show several other segmentation results on different testing sequences.

Figure 9: Confusion matrix on the Human Activity dataset.

Figure 10: More segmentation results on the Human Activity dataset."}, {"section_index": "10", "section_name": "B.3 DROSOPHILA", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 11. Since each sequence is too long to be clearly shown in one figure, we split the segmentation results of one sequence into four parts and show them in Figure 12.

Figure 11: Confusion matrix on the Drosophila dataset.

Figure 12: More segmentation results on the Drosophila dataset.

Figure 13: Confusion matrix on the Heart Sound dataset.

Also, we split the segmentation results of one sequence into four parts and show them in Figure 14.

Figure 14: More segmentation results on the Heart Sound dataset. [Legend: Systole, S2, Diastole, S1.]"}]
rky3QW9le
[{"section_index": "0", "section_name": "[TRANSFORMATIONAL SPARSE CODINC", "section_text": "Dimitrios C. Gklezakos & Rajesh P. N. Rao\n{gklezd, rao}@cs.washington.edt\nA fundamental problem faced by object recognition systems is that objects anc\ntheir features can appear in different locations, scales and orientations. Current\ndeep learning methods attempt to achieve invariance to local translations via pool-\ning, discarding the locations of features in the process. Other approaches explic-\nitly learn transformed versions of the same feature, leading to representations that\nquickly explode in size. Instead of discarding the rich and useful informatior\nabout feature transformations to achieve invariance, we argue that models shoulc\nlearn object features conjointly with their transformations to achieve equivariance\nWe propose a new model of unsupervised learning based on sparse coding that\ncan learn object features jointly with their affine transformations directly fror\nimages. Results based on learning from natural images indicate that our approact\nmatches the reconstruction quality of traditional sparse coding but with signifi-\ncantly fewer degrees of freedom while simultaneously learning transformations\nfrom data. These results open the door to scaling up unsupervised learning tc\nallow deep feature+transformation learning in a manner consistent with the ven-\ntral+dorsal stream architecture of the primate visual cortex."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A challenging problem in computer vision is the reliable recognition of objects under a wide rang\u00a2\nof transformations. Approaches such as deep learning that have achieved success in recent year:\nusually require large amounts of labeled data, whereas the human brain has evolved to solve the\nproblem using an almost unsupervised approach to learning object representations. During early\ndevelopment, the brain builds an internal representation of objects from unlabeled images that car\nbe used in a wide range of tasks.\nMuch of the complexity in learning efficient and general-purpose representations comes from the\nfact that objects can appear in different poses, at different scales, locations, orientations and lighting\nconditions. Models have to account for these transformed versions of objects and their features. Cur.\nrent successful approaches to recognition use pooling to allow limited invariance to two-dimensiona\ntranslations (Ranzato et al.|(2007)). At the same time pooling discards information about the loca\ntion of the detected features. This can be problematic because scaling to large numbers of object:\nrequires modeling objects in terms of parts and their relative pose, requiring the pose information tc\nbe retained.\nPrevious unsupervised learning techniques such as sparse coding (Olshausen & Field]\n\nlearn features similar to the ones in the visual cortex but these models have to explicitly learn large\nnumbers of transformed versions of the same feature and as such, quickly succumb to combinatorial\nexplosion, preventing hierarchical learning. 
Other approaches focus on computing invariant object signatures (Anselmi et al., 2013; 2016), but are completely oblivious to pose information.

Ideally, we want a model that allows object features and their relative transformations to be simultaneously learned, endowing itself with a combinatorial explanatory capacity by being able to apply learned object features with object-specific transformations across large numbers of objects. The goal of modeling transformations in images is two-fold:"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "(a) to facilitate the learning of pose-invariant sparse feature representations, and (b) to allow the use of pose information of object features in object representation and recognition.

We propose a new model of sparse coding called transformational sparse coding that exploits a tree structure to account for large affine transformations. We apply our model to natural images. We show that our model can extract pose information from the data while matching the reconstruction quality of traditional sparse coding with significantly fewer degrees of freedom. Our approach to unsupervised learning is consistent with the concept of "capsules" first introduced by Hinton et al. (2011), and more generally, with the dorsal-ventral (features+transformations) architecture observed in the primate visual cortex.

Sparse coding (Olshausen & Field, 1997) models each image I as a sparse combination of features:

$$I \approx F w \quad \text{s.t. } w \text{ is sparse}$$

Sparsity is usually enforced by an appropriate penalty; a typical choice is S_1(w) = ||w||_1. We can enhance sparse coding with affine transformations by transforming features before combining them. The vectorized input image I is then modeled as:

$$I = \sum_{k=1}^{K} w_k\, T(x_k)\, F_k$$

where w_k and F_k denote the k-th weight specific to the image and the k-th feature respectively, and T(x_k) is a feature- and image-specific transformation.

In modeling image transformations we follow the approach of Rao & Ruderman (1999) and Miao & Rao (2007). We consider the 2D general affine transformations. These include rigid motions such as vertical and horizontal translations and rotations, as well as scaling, parallel hyperbolic deformations along the X/Y axis and hyperbolic deformations along the diagonals. A discussion of why these are good candidates for inclusion in a model of visual perception can be found in Dodwell (1983). Figure 5 in Appendix A shows the effects of each transformation.
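To make this concrete, the sketch below builds translation generators for a vectorized n × n image via central finite differences (following the spirit of Rao & Ruderman (1999), though the discretization details here are our own) and applies T(x) with SciPy's matrix exponential.

import numpy as np
from scipy.linalg import expm

def translation_generators(n):
    """Generators of horizontal/vertical translation for an n x n image,
    acting on its vectorized form (finite-difference derivatives)."""
    idx = np.arange(n * n).reshape(n, n)
    Gx = np.zeros((n * n, n * n))
    Gy = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            if c + 1 < n:                # d/dx via central differences
                Gx[idx[r, c], idx[r, c + 1]] += 0.5
            if c - 1 >= 0:
                Gx[idx[r, c], idx[r, c - 1]] -= 0.5
            if r + 1 < n:                # d/dy
                Gy[idx[r, c], idx[r + 1, c]] += 0.5
            if r - 1 >= 0:
                Gy[idx[r, c], idx[r - 1, c]] -= 0.5
    return Gx, Gy

def transform(F, x, generators):
    """Apply T(x) = expm(sum_j x_j G_j) to a vectorized feature F."""
    A = sum(xj * Gj for xj, Gj in zip(x, generators))
    return expm(A) @ F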
Any transformation in this group can be expressed as the matrix exponential of a weighted combination of matrices (the group generators) that describe the behaviour of infinitesimal transformations around the identity:
$$T(x) = e^{\sum_{j=1}^{6} x_j A_j}$$
For images of $M$ pixels, $T(x)$ is a matrix of size $M \times M$. Note that the generator matrices and the features used are common across all images. The feature weights and transformation parameters can be inferred (and the features learned) by gradient descent on the regularized MSE objective:
$$L(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \Big\| I_i - \sum_{k=1}^{K} w_{ik}\, T(x_{ik}) F_k \Big\|_2^2 + \lambda_w S_1(w) + \lambda_F \|F\|_F^2$$
Although this model ties sparse coding with transformations elegantly, learning large transformations with it is intractable. The error surface of the loss function is highly non-convex with many shallow local minima. Figures 1(a), 1(b), 1(c) show the surface of $L$ as a function of horizontal and vertical translation, horizontal translation and rotation, and vertical translation and rotation parameters. The model tends to settle for small transformations around the identity. Due to the size of the parameters that we need to maintain, a random restart approach would be infeasible.
Figure 1: Normalized reconstruction error for individual vs. batch 8 x 8 natural image patches. (a),(b),(c) show the surface of the reconstruction error for horizontal and vertical translations, horizontal translations and rotation, and vertical translations and rotations for an individual data point and feature. (d),(e),(f) show the same, averaged over a batch of 2000 data points. The error is normalized between 0 and 1 for comparison. The global minimum in the range is marked in red. In the batch case, averaging makes the error surface smoother and learning easier.
We introduce Transformational Sparse Coding Trees to circumvent this problem using hierarchies of transformed features. The main idea is to gradually marginalize over an increasing range of transformations. Each node in the tree represents a feature derived as a transformed version of its parent, with the root being the template of the feature. The leaves are equivalent to a set of sparse basis features and are combined to reconstruct the input as described above. A version of the model using a forest of trees of depth one (flat trees) is given by:
$$I \approx \sum_{v=1}^{V} \sum_{b \in ch(v)} w_b U_b$$
where $U_b = T(x_{v \to b}) F_v$ and $ch(v)$ denotes the children of root $v$. The feature $U_b$ is a leaf, derived from the root feature $F_v$ via the fixed (across all data-points) transformation $T(x_{v \to b})$. Deeper trees can be built accordingly (Section 3.3). A small example of a tree learned from natural image patches is shown in Figure 2.
There are multiple advantages to such a hierarchical organization of sparse features. Some transformations are more common in data than others. Each path in the tree corresponds to a transformation that is common across images. Such a path can be viewed as a "transformation feature" learned from the data. Each additional node in the tree "costs" a fixed set of new parameters equal in size to the dimensions of the underlying Lie group (six in our case). At the same time the node contributes a whole new feature to the sparse code. Averaging over many data points smoothens the surface of the error function and makes larger transformations more accessible to optimization. Figures 1(d), 1(e), 1(f) show the error surface averaged over a batch of 2000 patches.
For every leaf that is activated, the root template represents the identity of the feature, and the transformation associated with the path to the root represents the pose. In other words, the tree is an equivariant representation of the feature over the parameter region defined by the set of paths to the leaves, very similar to the concept of a capsule introduced by Hinton et al. (2011). In fact, every increasing subtree corresponds to a capsule of increasing size.
Figure 2: Example of a tree learned from natural image patches. The leaves correspond to rigid transformations of the root.
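To make the flat-tree model above concrete, here is a minimal NumPy/SciPy sketch (our illustration, not the authors' released code) of computing a leaf $U_b = T(x_{v \to b}) F_v$ via the matrix exponential and assembling the reconstruction. The list `generators` of six M x M generator matrices is an assumed input (e.g., discretized affine generators built with sinc interpolation, as in Appendix A).
```python
import numpy as np
from scipy.linalg import expm

def transform(x, generators):
    """T(x) = expm(sum_j x_j A_j): image-domain transformation built
    from the Lie-group generators A_j (each an M x M matrix)."""
    A = sum(xj * Aj for xj, Aj in zip(x, generators))
    return expm(A)

def reconstruct_flat_forest(w, X, F, children, generators):
    """I_hat = sum_v sum_{b in ch(v)} w_b * T(x_{v->b}) F_v.

    w:        dict leaf id -> scalar weight (sparse; most entries zero)
    X:        dict leaf id -> 6-dim transformation parameters x_{v->b}
    F:        dict root id -> M-dim unit-norm root template
    children: dict root id -> list of leaf ids
    """
    M = next(iter(F.values())).shape[0]
    I_hat = np.zeros(M)
    for v, leaves in children.items():
        for b in leaves:
            U_b = transform(X[b], generators) @ F[v]  # leaf feature
            I_hat += w.get(b, 0.0) * U_b
    return I_hat
```
Note that the leaf features only need to be materialized once per batch, since the transformations $T(x_{v \to b})$ are shared across data points.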
The reconstruction mean squared-error (MSE) for a forest of flat trees is given by:
$$L_{MSE}(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \Big\| I_i - \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{ib}\, T(x_{v \to b}) F_v \Big\|_2^2$$
Increasing the feature magnitudes and decreasing the weights will result in a decrease in loss. We constrain the root feature magnitudes to be of unit L2 norm. Consider different, transformed versions of the same root template. For every such version there is a set of tree parameters that compensates for the intrinsic transformation of the root and results in the same leaves. To make the solution unique we directly penalize the transformation parameter magnitudes. Since scaling and parallel deformation can also change the magnitude of the filter, we penalize them more to keep features/leaves close to unit norm. The full loss function of the model is:
$$L(w, x, F) = L_{MSE}(w, x, F) + \lambda_w S_1(w) + \sum_{j=1}^{6} \lambda_j \|X_{[j]}\|_2^2 \quad \text{s.t. } \forall v,\ \|F_v\|_2 = 1$$
where $X_{[j]}$ is the vector of the collective parameters for generator $A_j$.
Lee et al. (2007) use an alternating optimization approach to sparse coding: first the weights are inferred using the feature-sign algorithm, and then the features are learned using a Lagrange dual approach. We use the same approach for the weights. Then we optimize the transformation parameters using gradient descent. The root features can be optimized using the analytical solution and projecting to unit norm.
The matrix exponential gradient $\frac{\partial e^{A(t)}}{\partial t}$ can be computed using the following formula (Ortiz et al. (2001)):
$$\frac{\partial e^{A(t)}}{\partial t} = \int_0^1 e^{\alpha A(t)}\, \frac{\partial A(t)}{\partial t}\, e^{(1-\alpha) A(t)}\, d\alpha = \mathbb{E}_{\alpha \sim U(0,1)}[D(\alpha)]$$
where $D(\alpha) = e^{\alpha A(t)}\, \frac{\partial A(t)}{\partial t}\, e^{(1-\alpha) A(t)}$. For our experiments, we approximated the gradient by drawing a few samples $\{\alpha_s\}_{s=1}^{S}$ and computing $\frac{1}{S}\sum_s D(\alpha_s)$.¹ This can be regarded as a stochastic version of the approach used by Culpepper & Olshausen (2009).
Some features might get initialized near shallow local optima (i.e. close to the borders or outside the receptive field). These features eventually become under-used by the model. We periodically check for under-used features and re-initialize their transformation parameters.² For re-initialization we select another feature in the same tree at random, with probability proportional to the fraction of data points that used it in that batch. We then reset the transformation parameters at random, with small variance and centered around the chosen filter's parameters.
¹In practice even a single sample works well. The computation over samples is easily parallelizable."}, {"section_index": "3", "section_name": "3.1 LEARNING REPRESENTATIONS", "section_text": "We apply transformational sparse coding (TSC) with forests of flat trees to natural image patches. Our approach allows us to learn features resembling those of traditional sparse coding. Apart from reconstructing the input, the model also extracts transformation parameters from the data. Figure 3 shows a reconstruction example. Figure 4 shows the root features learned from 10 x 10 natural image patches using a forest of size 8 with branching factor 8, equipped with the full six-dimensional group. The forest has a total of 64 features. Figure 4(a) shows the features corresponding to the roots. Figure 4(b) shows the corresponding leaves. Each row contains features derived from the same root. More examples of learned features are shown in Figures 7, 8, 9 and 10 in the Appendix.
Figure 3: Reconstruction example. The root features are transformed and combined with different weights to reconstruct (bottom right) the 8 x 8 natural image patch in the top right corner.
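The sampled matrix-exponential gradient described above is easy to sketch. The following is our minimal illustration (not the paper's implementation) for a single parameter direction $x_j$, where `generators[j]` plays the role of $\partial A / \partial x_j$:
```python
import numpy as np
from scipy.linalg import expm

def expm_grad_sample(x, generators, j, n_samples=1, rng=None):
    """Stochastic estimate of d expm(A(x)) / d x_j for A(x) = sum_k x_k A_k.

    Approximates the integral form of Ortiz et al. (2001),
    E_{alpha ~ U(0,1)}[ expm(alpha A) @ A_j @ expm((1 - alpha) A) ],
    by Monte Carlo sampling; as noted in the text, one sample often suffices.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = sum(xk * Ak for xk, Ak in zip(x, generators))
    Aj = generators[j]  # dA/dx_j
    grad = np.zeros_like(Aj, dtype=float)
    for _ in range(n_samples):
        alpha = rng.uniform()
        grad += expm(alpha * A) @ Aj @ expm((1.0 - alpha) * A)
    return grad / n_samples
```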
²A feature is under-used when the total number of data-points using it in a batch drops close to zero.
Note that the group dimension is equal to 3 for rigid motions and 6 for general 2D affine transformations.
Even though derivative features have to be explicitly constructed for inference, the degrees of freedom of our model are significantly lower than those of traditional sparse coding. Specifically, each tree contributes one unit-norm root template ($M - 1$ free parameters for $M$ pixels) plus $D$ transformation parameters per branch, where $D$ is the group dimension, so a forest of $V$ trees with branching factor $B$ has $df_{TSC} = V(M-1) + VBD$, whereas sparse coding with $K$ unit-norm features has $df_{SC} = K(M-1)$; these counts are consistent with most entries in Table 1 (a numerical check is sketched at the end of this section).
Figure 4: Learned features for 8 trees with a branching factor of 8. (a) Features corresponding to the roots. (b) Features/Leaves: Each row corresponds to leaves/transformations of the same root.
We compare transformational sparse coding forests of various layouts and choices for $\lambda_w$ with traditional sparse coding on 10 x 10 natural image patches. Some transformations change the feature magnitudes and therefore the sparsity pattern of the weights. To make the comparison clearer, for each choice of layout and penalty coefficient, we run sparse coding, constraining the feature magnitudes to be equal to the average feature magnitude of our model. The results are shown in Table 1. The reconstruction error of our model is close to that of sparse coding, albeit with slightly less sparse solutions, even though it has significantly fewer degrees of freedom. Our model extracts pose information in the form of group parameters.
Table 1: Comparison of transformational sparse coding (TSC) with sparse coding (SC) for 10 x 10 natural image patches. We compare the error (MSE) and the degrees of freedom (df) over 40000 data points. "Sparsity" is the average number of non-zero weights. $\lambda_w$ is the penalty coefficient for the weights and controls the sparseness of the solution.

lambda_w  Layout   TSC MSE  TSC Sparsity  df_TSC  SC MSE  SC Sparsity  df_SC  # features  df_SC/df_TSC
0.4       1 x 64   2.13     13.3          447     1.71    12.3         6336   64          14.17
0.5       1 x 128  2.28     12.1          867     1.96    10.3         12672  128         14.62
0.4       8 x 8    1.89     13.3          1176    1.72    12.5         6336   64          5.38
0.4       4 x 16   1.91     13.3          780     1.69    12.3         6336   64          8.12
0.5       8 x 8    2.36     10.4          1176    2.15    9.9          6336   64          5.38
0.5       4 x 16   2.38     11.0          780     2.12    10.0         6336   64          8.12
0.4       16 x 16  1.66     14.3          3120    1.56    13.2         25344  256         8.12
0.4       8 x 32   1.67     14.6          2328    1.56    13.2         25344  256         10.88

We can define deeper trees by associating a set of transformation parameters with each branch. These correspond to additive contributions to the complete transformation that yields the leaf when applied to the root:
$$I \approx \sum_{v} \sum_{b \in ch(v)} w_b\, T(x_b) F_v, \qquad x_b = \sum_{e \in path(v, b)} x_e$$
Optimizing deeper trees is more demanding due to the increased number of parameters. Their advantage is that they lend structure to the model. The parameters corresponding to the subtree of an internal node tend to explore the parameter subspace close to the transformation defined by that internal node. In tasks where it is disadvantageous to marginalize completely over transformations, equivariant representations corresponding to intermediate tree layers can be used. An example of such structure is shown in Figure 6 in Appendix B.
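As a sanity check of the degrees-of-freedom accounting referenced above, the helper below (our own, assuming the unit-norm template convention) reproduces most of the df values in Table 1, e.g., 1176 for the 8 x 8 layout and 6336 for 64 sparse-coding features on 10 x 10 patches:
```python
def df_tsc(num_trees, branching, pixels=100, group_dim=6):
    """TSC degrees of freedom: one unit-norm template per tree
    (pixels - 1 free values) plus group_dim parameters per branch."""
    return num_trees * (pixels - 1) + num_trees * branching * group_dim

def df_sc(num_features, pixels=100):
    """Plain sparse coding with unit-norm features."""
    return num_features * (pixels - 1)

assert df_tsc(8, 8) == 1176 and df_tsc(4, 16) == 780
assert df_sc(64) == 6336 and df_sc(256) == 25344
```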
Sohl-Dickstein et al. (2010) present a model for fitting Lie groups to video data. Their approach only works for estimating a global transformation between consecutive video frames, and it only supports transformations of a single kind (i.e., only rotations). Different such single-parameter transformations have to be chained together to produce the global one. The corresponding transformation parameters also have to be inferred and stored in memory, and cannot be directly converted to the parameters of a single transformation. Kokiopoulou & Frossard (2009) present an approach to optimally estimating transformations between pairs of images. They support rigid motions and isotropic scaling. Culpepper & Olshausen (2009) focus on learning the group operators and transformation parameters from pairs of images, but do not learn features from data. Our model supports all six transformations and learns object parts and their individual transformations. In contrast with those approaches, our model learns object parts jointly with their transformations within the same image, utilizes the full, six-dimensional, general affine Lie group, and captures the pose of each object part in the form of a single set of six transformation parameters.
Grimes & Rao (2005) propose a bilinear model that combines sparse coding with transformations. That model accounts for global transformations that apply to the entire image region, whereas our model accounts for individual transformations of image parts. Rao & Ballard (1998) propose a model that captures small image transformations with Lie groups using a first-order Taylor approximation; our model estimates larger transformations of image parts using the full exponential model. Rao & Ruderman (1999) and Miao & Rao (2007) use a first-order Taylor approximation to learn the group operators and the transformation parameters for small transformations.
The work closest to ours is that of Hinton et al. (2011) on capsules. A capsule learns to recognize its template (feature) over a wide range of poses. The pose is computed by a neural network (encoder). The decoder, resembling a computer graphics engine, combines the capsule templates in different poses to reconstruct the image. Each transformational sparse coding tree can be thought of as a capsule. The template corresponds to the root. The tree learns to "recognize" transformed versions of that template. Our work arrives at the concept of a capsule from a sparse coding perspective. A major difference is that our approach allows us to reuse each feature multiple times, in different transformed versions, for each data point.
Gens & Domingos (2014) propose a convolutional network that captures symmetries in the data by modeling symmetry groups; experiments with rigid motions or various affine transformations show reduced sample complexity. Related convolutional networks can handle translations, reflections and rotations of 90 degrees, or translations and rotations. All of the above are supervised learning models and, apart from the first, can handle only a limited set of transformations. Our model is completely unsupervised, extends sparse coding, and can handle all transformations generated by first-order differential equations of the form $\frac{dI(x)}{dx_j} = A_j\, I(x)$, whose solutions are the one-parameter groups $T(x_j) = e^{x_j A_j}$."}, {"section_index": "4", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed a sparse coding based model that learns object features jointly with their transformations, from data. Naively extending sparse coding to data-point specific transformations makes inference intractable. We introduce a new technique that circumvents this issue by using a tree structure that represents common transformations in data.
We show that our approach can learn interesting features from natural image patches with performance comparable to that of traditional sparse coding.
Investigating the properties of deeper trees, learning the tree structure dynamically from the data, and extending our model into a hierarchy are subjects of ongoing research."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso A. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013. URL http://arxiv.org/abs/1311.4158.
Benjamin Culpepper and Bruno A. Olshausen. Learning transport operators for image manifolds. In Advances in Neural Information Processing Systems 22, pp. 423-431. Curran Associates, Inc., 2009.
Robert Gens and Pedro Domingos. Deep symmetry networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS'14, pp. 2537-2545. MIT Press, 2014.
David B. Grimes and Rajesh P. N. Rao. Bilinear sparse coding for invariant vision. Neural Computation, 17(1):47-73, January 2005. doi: 10.1162/0899766052530893.
Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems 19, pp. 801-808. MIT Press, 2007.
Xu Miao and Rajesh P. N. Rao. Learning the Lie groups of visual invariance. Neural Computation, 19(10):2665-2693, October 2007. doi: 10.1162/neco.2007.19.10.2665.
Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
M. Ortiz, R. A. Radovitzky, and E. A. Repetto. The computation of the exponential and logarithmic mappings and their first and second linearizations. International Journal for Numerical Methods in Engineering, 52:1431, December 2001. doi: 10.1002/nme.263.
Marc'Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR'07). IEEE Press, 2007.
Rajesh P. N. Rao and Daniel L. Ruderman. Learning Lie groups for invariant visual perception. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 810-816. MIT Press, 1999.
Jascha Sohl-Dickstein, Jimmy C. Wang, and Bruno A. Olshausen. An unsupervised algorithm for learning Lie group transformations. CoRR, abs/1001.1027, 2010. URL http://arxiv.org/abs/1001.1027."}, {"section_index": "6", "section_name": "B DEEPER TREES AND STRUCTURE", "section_text": "Figure 6 presents an example of structure learned by deeper trees. This example consists of vertical and horizontal lines. Each image patch is either blank, contains exactly one vertical or one horizontal line, or contains both, each case occurring with a fixed probability.
Each line is then generated at one of eight positions at random. Fitting two binary trees results in some continuity in the features, whereas flat trees provide no such structure.
Figure 5 presents the effects of each individual transformation of the six that are supported by our model. The template is a square.
Figure 5: Effects of each individual transformation on the template (a): (b) horizontal translation, (c) vertical translation, (d) rotation, (e) scaling, (f) parallel hyperbolic deformation along the X/Y axis, (g) hyperbolic deformation along the diagonals. To compute the generators, we used the sinc interpolation function.
Figure 6: Features learned for the double-line example: (a) Input, (b) features learned by a forest of two flat trees of size eight, (c) features learned by two binary trees of the same size. For (c) the leaves have been reordered with subtree permutations to reveal the order. Each subtree learns features corresponding to an area of the input.
Figure 7: Learned features for 16 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.
Figure 8: Learned features for 8 trees with branching factor 32. Each row corresponds to leaves/transformations of the same root.
Figure 9: Learned features for 4 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.
Figure 10: Learned features for 1 tree with branching factor 64. All features are transformations of the same root."}]
SJIMPr9eg
[{"section_index": "0", "section_name": "BOOSTED RESIDUAL NETWORKS", "section_text": "Alan Mosca & George D. Magoulas\nDepartment of Computer Science and Information System:\nBirkbeck, University of London\nMalet Street, WC1E 7HX, London, UK\nfa.mosca, gmagoulas}@dcs.bbk.ac.uk"}, {"section_index": "1", "section_name": "| INTRODUCTION", "section_text": "Residual Networks, a type of deep network recently introduced in (2015a), are character:\nized by the use of shortcut connections (sometimes also called skip connections), which connec\nthe input of a layer of a deep network to the output of another layer positioned a number of level:\n\u201cabove\u201d it. The result is that each one of these shortcuts shows that networks can be build in blocks\nwhich rely on both the output of the previous layer and the previous block.\nResidual Networks have been developed with many more layers than traditional Deep Networks\nin some cases with over 1000 blocks, such as the networks in[He et al] (2016). A recent study i1\ncompares Residual Networks to an ensemble of smaller networks. This is don\nby unfolding the shortcut connections into the equivalent tree structure, which closely resembles at\nensemble. An example of this can be shown in Figure[I]\nFigure 1: A Residual Network of N blocks can be unfolded into an ensemble of 2Y \u2014 1 smaller\nnetworks.\nDense Convolutional Neural Networks are another type of network that make:\nuse of shortcuts, with the difference that each layer is connected to all its ancestor layers directly bj\na shortcut. Similarly, these could be also unfolded into an equivalent ensemble.\nTrue ensemble methods are often left as an afterthought in Deep Learning models: it is generally\nconsidered sufficient to treat the Deep Learning method as a \u201cblack-box\u201d and use a well-knowr\ngeneric Ensemble method to obtain marginal improvements on the original results. Whilst this is\nan effective way of improving on existing results without much additional effort, we find that it car\namount to a waste of computations. Instead, it would be much better to apply an Ensemble methoc\nthat is aware, and makes us of, the underlying Deep Learning algorithm\u2019s architecture.\nWe define such methods as \u201cwhite-box\u201d Ensembles, which allow us to improve on the generalisation\nand training speed compared to traditional Ensembles, by making use of particular properties of the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "n this paper we present a new ensemble method, called Boosted Residual Net-\nvorks, which builds an ensemble of Residual Networks by growing the member!\n1etwork at each round of boosting. The proposed approach combines recent de-\nyelopements in Residual Networks - a method for creating very deep networks by\nncluding a shortcut layer between different groups of layers - with the Deep Incre-\nnental Boosting, which has been proposed as a methodology to train fast ensem-\nles of networks of increasing depth through the use of boosting. We demonstrate\nhat the synergy of Residual Networks and Deep Incremental Boosting has bette:\notential than simply boosting a Residual Network of fixed structure or using the\n\u2018quivalent Deep Incremental Boosting without the shortcut layers.\nThe next section presents the background on Deep Incremental Boosting. Then the proposed\nBoosted Residual Networks method is described. 
Experiments and results are discussed next, and the paper ends with conclusions.
Deep Incremental Boosting, introduced in Mosca & Magoulas (2016a), is an example of such a white-box ensemble method, developed for building ensembles of Convolutional Networks. The method makes use of principles from transfer of learning, like for example those used in Yosinski et al. (2014), applying them to conventional AdaBoost (Schapire (1990)). Deep Incremental Boosting increases the size of the network at each round by adding new layers at the end of the network, allowing subsequent rounds of boosting to run much faster. In the original paper on Deep Incremental Boosting (Mosca & Magoulas (2016a)), this has been shown to be an effective way to learn the corrections introduced by the emphatisation of learning mistakes of the boosting process. The argument as to why this works effectively is based on the fact that the datasets at rounds t and t + 1 will be mostly similar, and therefore a classifier h_t that performs better than randomly on the resampled dataset X_t will also perform better than randomly on the resampled dataset X_{t+1}. This is under the assumption that both datasets are sampled from a common ancestor set X_0. It is subsequently shown that such a classifier can be re-trained on the differences between X_t and X_{t+1}.
This practically enables the ensemble algorithm to train the subsequent rounds for a considerably smaller number of epochs, consequently reducing the overall training time by a large factor. The original paper also provides a conjecture-based justification for why it makes sense to extend the previously trained network to learn the "corrections" taught by the boosting algorithm. A high level description of the method is shown in Algorithm 1, and the structure of the network at each round is illustrated in Figure 2.
Algorithm 1 Deep Incremental Boosting
Figure 2: Illustration of subsequent rounds of DIB
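Since the listing of Algorithm 1 did not survive extraction, the following schematic Python sketch (ours, not the authors' code) captures the DIB loop described above. `build_net`, `train`, and `extend_net` are hypothetical framework-specific callables, and the weight update follows a standard AdaBoost-style rule:
```python
import numpy as np

def deep_incremental_boosting(X, y, build_net, train, extend_net,
                              rounds=10, n0=50, nt=10):
    """Schematic Deep Incremental Boosting (DIB) loop.

    build_net():            returns the initial base network
    train(net, X, y, w, e): trains net for e epochs on data weighted by w
    extend_net(net):        returns net grown by a new block, with the
                            previously learned weights transferred
    """
    n = len(X)
    w = np.full(n, 1.0 / n)           # boosting distribution over examples
    ensemble, alphas = [], []
    net = build_net()
    for t in range(rounds):
        epochs = n0 if t == 0 else nt  # only round 0 gets a full schedule
        net = train(net, X, y, w, epochs)
        miss = (net.predict(X) != y)
        err = np.dot(w, miss)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(alpha * miss)      # emphasize misclassified examples
        w /= w.sum()
        ensemble.append(net)
        alphas.append(alpha)
        net = extend_net(net)          # grow the network for the next round
    return ensemble, alphas
```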
In this section we propose a method for generating Boosted Residual Networks. This works by increasing the size of an original residual network by one residual block at each round of boosting. The method achieves this by selecting an injection point index p_t at which the new block is to be added, which is not necessarily the last block in the network, and by transferring the weights from the layers below p_t in the network trained at the previous round of boosting.
Because the boosting method performs iterative re-weighting of the training set, skewing the resample at each round to emphasize the training examples that are harder to learn, it becomes necessary to utilise the entire ensemble at test time, rather than just use the network trained in the last round. This has the effect that Boosted Residual Networks cannot be used as a way to train a single Residual Network incrementally. However, as we will discuss later, it is possible to alleviate this situation by deriving an approach that uses bagging instead of boosting, therefore removing the necessity to use the entire ensemble at test time. It is also possible to delete individual blocks from a Residual Network at training and/or testing time, as presented in He et al. (2015a), however this issue is considered out of the scope of this paper.
The iterative algorithm used in the paper is shown in Algorithm 2. At the first round, the entire training set is used to train a network of the original base architecture, for a number of epochs n_0. After the first round, the following steps are taken at each subsequent round t:
• The ensemble constructed so far is evaluated on the training set to obtain the set errors ε, so that a new training set can be sampled from the original training set. This is a step common to all boosting algorithms.
• A new network is created, with the addition of a new block of layers B_new immediately after position p_t, which is determined as an initial pre-determined position p_0 plus an offset t * δ_p accounting for the blocks added at previous rounds. This puts the new block of layers immediately after the block of layers added at the previous round, so that all new blocks are effectively added sequentially.
• The weights from the layers below p_t are copied from the network trained at round t - 1 to the new network. This step allows to considerably shorten the training, thanks to the transfer of learning shown in Yosinski et al. (2014).
• The newly created network is subsequently trained for a reduced number of epochs n_t.
• The new network is added to the ensemble following the traditional rules and weight α_t used in AdaBoost.
Algorithm 2 Boosted Residual Networks
Figure 3 shows a diagram of how the Ensemble is constructed by deriving the next network at each round of boosting from the network used in the previous round.
Figure 3: Illustration of subsequent rounds of BRN
We identified a number of optional variations to the algorithm that may be implemented in practice, which we have empirically established as not having an impact on the overall performance of the network. We report them here for completeness.
• Freezing the layers that have been copied from the previous round.
• Only utilising the weights distribution for the examples in the training set instead of resampling, as an input to the training algorithm.
• Inserting the new block always at the same position, rather than after the previously inserted block (we found this to affect performance negatively).
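The step that distinguishes BRN from DIB is the injection of a residual block at p_t = p_0 + t * δ_p rather than at the end of the network. A hypothetical sketch of that step, compatible with the DIB loop above (all network methods here — `clone`, `insert_block`, `copy_weights_below` — are placeholders, not a real API):
```python
def make_brn_extender(p0, delta_p, new_block):
    """Build an extend_net callable implementing the BRN block injection:
    at round t, a fresh residual block is inserted immediately after
    position p_t = p0 + t * delta_p, and the weights of all layers below
    the injection point are transferred from the previous round's network.
    """
    state = {"t": 0}

    def extend_net(net):
        p_t = p0 + state["t"] * delta_p       # injection point this round
        state["t"] += 1
        grown = net.clone()                   # placeholder: deep copy
        grown.insert_block(p_t, new_block())  # placeholder: add a residual block
        grown.copy_weights_below(net, p_t)    # transfer of learning
        return grown

    return extend_net
```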
"}, {"section_index": "3", "section_name": "3.1 COMPARISON TO APPROXIMATE ENSEMBLES", "section_text": "While both Residual Networks and Densely Connected Convolutional Networks may be unfolded into an equivalent ensemble, we note that there is a differentiation between an actual ensemble method and an ensemble "approximation". During the creation of an ensemble, one of the principal factors is the creation of diversity: each base learner is trained independently, on variations (resamples in the case of boosting algorithms) of the training set, so that each classifier is guaranteed to learn a different function that represents an approximation of the training data. This is the enabling factor for the ensemble to perform better in aggregate.
In the case of Densely Connected Convolutional Networks (DCCN) specifically, one may argue that a partial unfolding of the network could be, from a schematic point of view, very similar to an ensemble of incrementally constructed Residual Networks. We make the observation that, although this would be correct, on top of the benefit of diversity, our method also provides a much faster training methodology: the only network that is trained for a full schedule is the network created at the first round, which is also the smallest one. All subsequent networks are trained for a much shorter schedule, saving a considerable amount of time. Additionally, while the schematics may seem identical, there is a subtle difference: each member network outputs a classification of its own, which is then aggregated by weighted averaging, whilst in a DCCN the input of the final aggregation layer is the output of each underlying set of layers. We conjecture that this aggressive dimensionality reduction before the aggregation will have a regularising effect on the ensemble.
In the experiments we used the MNIST, CIFAR-10 and CIFAR-100 datasets, and compared Boosted Residual Networks (BRN) with an equivalent Deep Incremental Boosting (DIB) without the skip-connections, AdaBoost with the equivalent Residual Network as its base classifier (AdaBoost), and the single Residual Network (Single Net). In order to reduce noise, we aligned the random initialisation of all networks across experiments by fixing the seeds for the random number generators, and no dataset augmentation was used, either online or offline. Results are reported in Table 1, while Figure 4 shows a side-by-side comparison of accuracy levels at each round of boosting for both DIB and BRN on the MNIST and CIFAR-100 test sets. This figure illustrates how BRNs are able to consistently outperform DIB, regardless of ensemble size, and although such differences still fall within a Bernoulli confidence interval of 95%, we note that this does not take account of the fact that all the random initialisations were aligned, so both methods started with the exact same network.
Table 1: Test accuracy in the three benchmarks for the methods compared.

            Single Net | AdaBoost | DIB     | BRN
MNIST       99.41 %    | 99.41 %  | 99.47 % | 99.53 %
CIFAR-10    89.12 %    | 89.74 %  | 90.83 % | 90.85 %
CIFAR-100   67.25 %    | 68.18 %  | 68.56 % | 69.04 %

Table 2 shows that this is achieved without significant changes in the training time.¹ The main speed increase is due to the fact that the only network being trained with a full schedule is the first network, which is also the smallest, whilst all other derived networks are trained for a much shorter schedule (in this case only 10% of the original training schedule).
The initial network architectures for the first round of boosting are shown in Table 3a for MNIST, and Table 3b for CIFAR-10 and CIFAR-100. It is worth mentioning that we used relatively simple network architectures that were fast to train, which still perform well on the datasets at hand, with accuracy close to, but not comparable to, the state-of-the-art. This enabled us to test larger Ensembles within an acceptable training time.
Training used the WAME method (Mosca & Magoulas (2016b)), which has been shown to be faster than Adam and RMSprop, whilst still achieving comparable generalisation. This is thanks to a specific weight-wise learning rate acceleration factor that is determined based only on the signs of the current and previous partial derivatives ∂E/∂w_ij. For the single Residual Network, and for the networks in AdaBoost, we trained each member for 100 epochs. For Deep Incremental Boosting and Boosted Residual Networks, we trained the first round for 50 epochs, and every subsequent round for 10 epochs, and ran all the algorithms for 10 rounds of boosting, except for the single network. The structure of each incremental block added to Deep Incremental Boosting and Boosted Residual Networks at each round is shown in Table 4a for MNIST, and in Table 4b for CIFAR-10 and CIFAR-100. All layers were initialised following the recommendations in He et al. (2015b).
¹In some cases BRN is actually faster than DIB, but we believe this to be just noise due to external factors such as system load.
Table 2: Training times comparison.
Table 3: Network structures used in experiments. The layers marked with "*" indicate the location after which we added the residual blocks.
(a) MNIST: 64 conv 5 x 5; 2 x 2 max-pooling; 128 conv 5 x 5; 2 x 2 max-pooling *; Dense, 1024 nodes; 50% dropout.
(b) CIFAR-10 and CIFAR-100: 2 x 96 conv 3 x 3; 96 conv 3 x 3, 2 x 2 strides; 96 conv 3 x 3, 2 x 2 strides; 96 conv 3 x 3, 2 x 2 strides; 2 x 2 max-pooling; 2 x 192 conv 3 x 3; 192 conv 3 x 3, 2 x 2 strides; 192 conv 3 x 3, 2 x 2 strides; 192 conv 3 x 3, 2 x 2 strides; 2 x 2 max-pooling *; 192 conv 3 x 3; 192 conv 1 x 1; 10 conv 1 x 1; global average pooling; 10-way softmax.
Table 4: Structure of blocks added at each round of DIB and BRN.
(a) MNIST: 64 conv 3 x 3; Batch Normalization; ReLU activation.
(b) CIFAR-10 and CIFAR-100: 192 conv 3 x 3; Batch Normalization; ReLU activation; 192 conv 3 x 3; Batch Normalization; ReLU activation.
Figure 4: Round-by-round comparison of DIB vs BRN on the test set: accuracy per boosting round on (a) MNIST and (b) CIFAR-100.
Distilled Boosted Residual Network: DBRN. In another set of experiments we tested the performance of a Distilled Boosted Residual Network (DBRN). Distillation has been shown to be an effective process for regularising large Ensembles of Convolutional Networks in Mosca & Magoulas (2016c), and we have applied the same methodology to the proposed Boosted Residual Network. For the distilled network structure we used the same architecture as that of the Residual Network from the final round of boosting. Accuracy results in testing are presented in Table 5, and for completeness of comparison we also report the results for the distillation of DIB, following the same procedure, as DDIB.
Table 5: Comparative results in terms of testing accuracy.

            DBRN     | DDIB
MNIST       99.49 %  | 99.44 %
CIFAR-10    91.11 %  | 90.66 %
CIFAR-100   66.63 %  | 65.91 %

Bagged Residual Networks: BARN. We experimented with substituting the boosting algorithm with a simpler bagging algorithm (Breiman (1996)) to evaluate whether it would be possible to only use the network from the final round of bagging as an approximation of the Ensemble. We called this the Bagged Approximate Residual Networks (BARN) method. We then also tested the performance of the Distilled version of the whole Bagging Ensemble for comparison. The results are reported as "DBARN".
The results are reported in Table 6. It is clear that trying to use the last round of bagging is not comparable to using the entire Bagging ensemble at test time, or to deriving a new distilled network from it.
Table 6: Test accuracy for BARN.

            BRN     | Bagging | BARN    | DBARN
MNIST       99.50 % | 99.55 % | 99.29 % | 99.36 %
CIFAR-10    90.56 % | 91.43 % | 88.47 % | 90.63 %
CIFAR-100   69.04 % | 68.15 % | 69.42 % | 66.16 %

In this paper we have derived a new ensemble algorithm specifically tailored to Convolutional Networks to generate Boosted Residual Networks. We have shown that this surpasses the performance of a single Residual Network equivalent to the one trained at the last round of boosting, of an ensemble of such networks trained with AdaBoost, and of Deep Incremental Boosting, on the MNIST and CIFAR datasets, without using augmentation techniques.
We then derived and looked at a distilled version of the method, and how this can serve as an effective way to reduce the test-time cost of running the Ensemble. We used Bagging as a proxy to test generating the approximate Residual Network, which, with the parameters tested, does not perform as well as the original Residual Network, BRN or DBRN.
Further experimentation on the Distilled methods presented in the paper, namely DBRN and DBARN, is necessary to fully investigate their behaviour. This is indeed part of our work in the near future. Additionally, the Residual Networks built in our experiments were comparatively smaller than those that achieve state-of-the-art performance. Reaching state-of-the-art on specific benchmark datasets was not our goal; instead, we intended to show that we developed a methodology that makes it feasible to create ensembles of Residual Networks following a "white-box" approach to significantly improve the training times and accuracy levels. Nevertheless, it might be appealing in the future to evaluate the performance improvements obtained when creating ensembles of larger, state-of-the-art networks. Additional further investigation could also be conducted on the creation of Boosted Densely Connected Convolutional Networks, by applying the same principle to DCCN instead of Residual Networks."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
Alan Mosca and George Magoulas. Deep incremental boosting. In Christoph Benzmuller, Geoff Sutcliffe, and Raul Rojas (eds.), GCAI 2016, 2nd Global Conference on Artificial Intelligence, volume 41 of EPiC Series in Computing, pp. 293-302. EasyChair, 2016a.
Alan Mosca and George D. Magoulas. Training convolutional networks with weight-wise adaptive learning rates. In Under Review, 2016b.
R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, 1990."}]
HyWWpw5ex
[{"section_index": "0", "section_name": "RECURRENT COEVOLUTIONARY FEATURE\nEMBEDDING PROCESSES FOR RECOMMENDATION", "section_text": "Hanjun Dai; Yichen Wang; Rakshit Trivedi & Le Song\nRecommender systems often use latent features to explain the behaviors of users\nand capture the properties of items. As users interact with different items over\ntime, user and item features can influence each other, evolve and co-evolve over\ntime. To accurately capture the fine grained nonlinear coevolution of these features,\nwe propose a recurrent coevolutionary feature embedding process model, which\ncombines recurrent neural network (RNN) with a multi-dimensional point process\nmodel. The RNN learns a nonlinear representation of user and item embeddings\nwhich take into account mutual influence between user and item features, and\nthe feature evolution over time. We also develop an efficient stochastic gradient\nalgorithm for learning parameters. Experiments on diverse real-world datasets\ndemonstrate significant improvements in user behavior prediction compared to\nstate-of-the-arts."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "E-commerce platforms and social service websites, such as Reddit, Amazon, and Netflix, attracts\nthousands of users every second. Effectively recommending the appropriate service items to users is\na fundamentally important task for these online services. It can significantly boost the user activities\non these sites and leads to increased product purchases and advertisement clicks.\nThe interactions between users and items play a critical role in driving the evolution of user interest:\nand item features. For example, for music streaming services, a long-time fan of Rock music listen:\nto an interesting Blues one day, and starts to listen to more Blues instead of Rock music. Similarly, <\nsingle music may also serve different audiences at different times,e.g., a music initially targeted fo!\nan older generation may become popular among the young, and the features of this music need to b\u00a2\nupdated. Furthermore, as users interact with different items, users\u2019 interests and items\u2019 features car\nalso co-evolve over time, i.e., their features are intertwined and can influence each other:\ne User \u2014 item. In online discussion forums such as Reddit, although a group (item) is initially\ncreated for statistics topics, users with very different interest profiles can join this group. Hence\nthe participants can shape the features of the group through their postings. It is likely that this\ngroup can finally become one about deep learning because most users concern about deep learning\n\ne Item \u2014 user. As the group is evolving towards topics on deep learning, some users may become\nmore interested in deep learning topics, and they may participate in other specialized groups on\ndeep learning. On the opposite side, some users may gradually gain interests in pure math groups\nlose interests in statistics and become inactive in this group.\nSuch co-evolutionary nature of user-item interactions raises very important questions on how tc\nlearn them from the increasingly available data. However, existing methods either treat the tempora\nuser-item interactions data as a static graph or use epoch based methods such as tensor factorizatior\nto learn the latent features {2009} [Yang et al.|/2011). These method:\nare not able to capture the fine grained temporal dynamics of user-item interactions. 
Recent poin\nprocess based models treat time as a random variable and improves over the traditional methods\n\nignificantly (Du et al.| 2015} Wang et al. 2016b). However, these works make strong assumptions\n\u201cAuthors have equal contributions.\n{hanjundai ,yichen.wang, rstrivedi}@gatech .edu, lsong@cc.gatech.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Item\n2__\u00ab \u201cefeature\n\nio)\n\nInteraction\n\n\u2018hd +*feature\n\nqt)\n\nChristine User\n\n+ feature\n\nInitialize item feature\n\nbem ti, (to) = (V4 \u00abif, p\u2014eltem profile\n\nyi, (t) Evolution\na. (Co-evolution\ni,t) =o tVe - Uy, (tr) User > Item\n* +V3\u00b0 41 Context\nCp infra) 4+V4 \u00ab(ty = to) }\u20142Dritt\nInitialize user feature\nUy, (to) = o(W, - ul.) User profile\nWy - Uy, (tr) Evolution\nAlice a Co-evolution\nWa ti, (7).\nals Uy, (4) =o a ltem>User\n2k +W3- 41 \u2018Context\n+Wy (ty = to Drift\nabout the function form of the generative processes, which may not reflect the reality or accurat\nenough to capture the complex and nonlinear user-item influence in real world.\nre\n\ne Novel model. We propose a novel model that captures the nonlinear co-evolution nature of user\nand items\u2019 embeddings. It assigns an evolving feature embedding process for each user and iter\nand the co-evolution of these latent feature processes is modeled with two parallel components: (\nitem \u2014 user component, a user\u2019s latent feature is determined by the nonlinear embedding of latet\nfeatures of the items he interacted with; and (ii) user \u2014+ item component, an item\u2019s latent feature\nare also determined by the latent features of the users who interact with the item.\n\ne Technical Challenges. We use RNN to parametrize the interdependent and intertwined user an\nitem embeddings. The increased flexibility and generality further introduces technical challenge\non how to train RNN on the co-evolving graphs. The co-evolution nature of the model makes th\nsamples inter-dependent and not identically distributed, which is contrary to the assumptions i\nthe traditional setting and significantly more challenging. We are the first to propose an efficiet\nstochastic training algorithm that makes the BTPP tractable in the co-evolving graph.\n\ne Strong performance. We evaluate our method over multiple datasets, verifying that our metho\ncan lead to significant improvements in user behavior prediction compared to previous state-of-th\narts. Precise time prediction is especially novel and not possible by most prior work.\nRecent work predominantly fix the latent features assigned to each user and item (\n\nMnih}/2008}/Chen et al.}|2009}|Agarwal & Chen Ekstrand et al. | 2011}[Koren & Sill SOT\nYang et al.||2011{]Yi et al.||2014{|/Wang & Pal . In more sophisticated methods, the time\nis divided into epochs, and static latent feature trodes are applied to each spon to capture some\nemp aspects of the data {Koren} |2009|\n\nae ods, it is not clear how to choose the e engi parameter. First, different users may have | very\ndifferent timescale when they interact with those service items, making it difficult to choose a unified\nepoch length. Second, it is not easy for these methods to answer time-sensitive queries such as when\na user will return to the service item. The predictions are only in the resolution of the chosen epoch\nlength. Recently, proposed a low-rank point process based model for time-sensitive\nrecommendations from recurrent user activities. 
However, it fails to capture the heterogeneous\ncoevolutionary properties of user-item interactions. [Wang et al] [Wang etal 20166 [2016b) models the co-evolutionary\nproperty, but uses a simple linear representation of the users\u2019 and items\u2019 latent features, which might\nnot be expressive enough to capture the real world patterns. As demonstrated in (2016),\nFigure 1: Model illustration. (a) User-item interaction events data. Each edge stands for a tuple\nand contains the information of user, item, interaction time, and interaction feature. (b) The latent\nfeature of the user and item are updated at each event time, by a nonlinear activation function o(-)\nand contain four terms: self evolution, co-evolution, context (interaction feature), and self drift.\n[In this paper, we propose a recurrent coevolutionary feature embedding process framework. It\ncombines recurrent neural network (RNN) with point process models, and efficiently captures the\nco-evolution of user-item features. Our model can automatically find an efficient representation of\nthe underlying user and item latent feature without assuming a fixed parametric forms in advance.\nFioure!]1]}summarizes our framework. In particular. our work makes the following contributions:\nthe nonlinear RNN is quite flexible to approximate many point process models. Also we will show\nthat, our model only has O(#user + #item) regardless of RNN related parameters, and can also be\npotentially applied to online setting.\nIn the deep learning community, proposed a hierarchical Bayesian model\n\nthat jointly performs learning for the content features and collaborative filtering for the rating:\nmatrix. (Hidasi et al.|/2016) applied RNN and adopt item-to-item recommendation approach wit\nsession based data. (Tan et al.|[2016) improved this model with techniques like data augmentation\ntemporal change adaptation. (Ko et al.| (2016) proposed collaborative RNN that extends collaborative\nfiltering method to capture history of user behavior. Specifically, they used static global latent factor:\nfor items and assign separate latent factors for users that are dependent on their past history. (Song\net al.| 6) extended the deep semantic structured model to capture multi-granularity tempora\npreference of users. They use separate RNN for each temporal granularity and combine them witk\nfeed forward network which models users\u2019 and items\u2019 long term static features. However, none o!\nthese works model the coevolution of users\u2019 and items\u2019 latent features and are still extensions of epoct\nbased methods. Our work is unique since we explicitly treat time as a random variable and captures\nthe coevolution of users\u2019 and items\u2019 latent features using temporal point proc: Finally, our work\nis inspired from the recurrent marked temporal point process model ( However, this\nwork only focuses on learning a one-dimension point process. Our work is significantly differen\nsince we focus on the recommendation system setting with the novel idea of feature coevolution anc\nwe use multi-dimensional point processes to capture user-item interactions."}, {"section_index": "3", "section_name": "3. BACKGROUND ON TEMPORAL POINT PROCESSES", "section_text": "The function form of the intensity \\(t) is often designed to capture the phenomena of interests. Som\ncommonly used form includes:\ne Hawkes processes [1971] [Wang et al. 
• Hawkes processes (Hawkes, 1971), whose intensity models the mutual excitation between events, i.e., $\lambda(t) = \mu + \alpha \sum_{t_i \in \mathcal{H}(t)} \kappa_\omega(t - t_i)$, where $\kappa_\omega(t) := \exp(-\omega t)$ is an exponential triggering kernel and $\mu \geq 0$ is a baseline intensity. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel κ_ω and the weight α ≥ 0, making the intensity history dependent and a stochastic process by itself.
• Rayleigh processes, whose intensity function is $\lambda(t) = \alpha t$, where α > 0 is the weight parameter.
4 RECURRENT COEVOLUTIONARY FEATURE EMBEDDING PROCESSES
In this section, we present the generative framework for modeling the temporal dynamics of user-item interactions. We first use an RNN to explicitly capture the co-evolving nature of users' and items' latent features. Then, based on the compatibility between the users' and items' latent features, we model the user-item interactions by a multi-dimensional temporal point process, parametrizing the intensity function by the compatibility between users' and items' latent features.
We associate feature embeddings $u_u(t) \in \mathbb{R}^k$ with each user u and $i_i(t) \in \mathbb{R}^k$ with each item i. These features represent the subtle properties which cannot be directly observed, such as the interests of a user and the semantic topics of an item. Specifically, we model the drift, evolution, and co-evolution of $u_u(t)$ and $i_i(t)$ as piecewise constant functions of time that have jumps only at event times. Specifically, we define:
User latent feature embedding process. For each user u, the corresponding embedding after user u's k-th event $e_k^u = (i_k^u, t_k^u, q_k^u)$ can be formulated as:
$$u_u(t_k^u) = \sigma\big( \underbrace{W_1 (t_k^u - t_{k-1}^u)}_{\text{temporal drift}} + \underbrace{W_2\, u_u(t_{k-1}^u)}_{\text{self evolution}} + \underbrace{W_3\, i_{i_k^u}(t_k^u{-})}_{\text{co-evolution: item feature}} + \underbrace{W_4\, q_k^u}_{\text{interaction feature}} \big) \quad (2)$$
where t− means the time point just before time t, $W_4$ (and the analogous $V_4$ below) is the embedding matrix mapping from the explicit high-dimensional interaction-feature space into the low-rank latent feature space, and $W_1, V_1 \in \mathbb{R}^k$, $W_2, V_2, W_3, V_3 \in \mathbb{R}^{k \times k}$ are weight parameters. σ(·) is a nonlinear activation function, such as the commonly used tanh or sigmoid for RNNs. For simplicity, we use a basic recurrent neural network to formulate the recurrence, but it is also straightforward to extend it using GRU or LSTM units to gain more expressive power. Figure 1 summarizes the basic setting of our model.
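A minimal sketch (ours, assuming a basic RNN cell with tanh activation) of one co-evolution step implementing equations (2)-(3): both updates read the pre-event embeddings, which is what couples the two processes.
```python
import numpy as np

def coevolve_step(u, i, dt_u, dt_i, q, W, V, sigma=np.tanh):
    """One update at an interaction between user embedding u and item
    embedding i, with per-side time gaps dt_u, dt_i since each side's
    previous event, and interaction feature vector q.

    W = (W1, W2, W3, W4): user-side weights (W1 a k-vector scaling the
    time gap, W2/W3 k x k, W4 k x D). V is the analogous item side.
    Returns the post-event embeddings (u_new, i_new).
    """
    W1, W2, W3, W4 = W
    V1, V2, V3, V4 = V
    u_new = sigma(W1 * dt_u + W2 @ u + W3 @ i + W4 @ q)  # item -> user
    i_new = sigma(V1 * dt_i + V2 @ i + V3 @ u + V4 @ q)  # user -> item
    return u_new, i_new
```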
Next we discuss the rationale of each term in detail:
• Temporal drift. The first term is defined based on the time difference between consecutive events of a specific user or item. It allows the basic features of users (e.g., a user's self-crafted interests) and items (e.g., textual categories and descriptions) to smoothly drift through time. Such changes of basic features are normally caused by external influences.
• Self evolution. The current user feature should also be influenced by its feature at the earlier time. This captures the intrinsic evolution of user/item features. For example, a user's current taste should be more or less similar to his/her taste two days ago.
• User-item coevolution. Users' and items' latent features can mutually influence each other. This term captures two parallel processes. First, a user's embedding is determined by the latent features of the items he interacted with: at each time $t_k^u$, the latent feature of the interacted item is $i_{i_k^u}(t_k^u{-})$, and we capture both the temporal influence and the feature of each historical item as a latent embedding. Conversely, an item's embedding is determined by the feature embedding of the user who just interacts with the item.
• Evolution with interaction features. Users' and items' features can evolve and be influenced by the characteristics of their interactions. For instance, genre changes of movies indicate the changing tastes of users, and the theme of a chatting-group can easily be shifted to certain topics of the involved discussions. In consequence, this term captures the influence of the current interaction features on the changes of the latent user (item) features.
• Interaction feature. This is the additional information generated in the user-item interactions. For example, in online discussion forums such as Reddit, the interaction features are the posts and comments. In online review sites such as Yelp, it is the reviews of the businesses.
The embeddings at the initial time, $u_u(t_0)$ and $i_i(t_0)$, are simply the embeddings of static user/item features such as the user's profile and the item's categorical features. For notation simplicity, we define $O^u = \{e_k^u = (i_k^u, t_k^u, q_k^u)\}$ as the ordered list of all events related to user u, and $O^i = \{e_k^i = (u_k^i, t_k^i, q_k^i)\}$ as the ordered list of all events related to item i. We also set $t_0^u = t_0^i = 0$ for all users and items; $t_k{-}$ denotes the time point just before time $t_k$.
Item latent feature embedding process. Symmetrically, the embedding for item i after its k-th event is:
$$i_i(t_k^i) = \sigma\big( \underbrace{V_1 (t_k^i - t_{k-1}^i)}_{\text{temporal drift}} + \underbrace{V_2\, i_i(t_{k-1}^i)}_{\text{self evolution}} + \underbrace{V_3\, u_{u_k^i}(t_k^i{-})}_{\text{co-evolution: user feature}} + \underbrace{V_4\, q_k^i}_{\text{interaction feature}} \big) \quad (3)$$
Here both the user's and the item's feature embedding processes are piecewise constant functions of time and are only updated when an interaction event happens. A user's attributes change only when he has a new interaction with some item; for example, a user's taste for music changes only when he listens to some new or old music. Also, an item's attributes change only when some user interacts with it. This differs from (2013), who also model time changes with piecewise constant functions, but do not model coevolution and are not capable of predicting future time points.
To summarize, each feature embedding process evolves according to the respective base temporal user (item) features, and the processes are also mutually dependent on each other due to the endogenous influences from the interaction features and the entangled latent features.
We model the occurrence of interaction events between user u and item i with a multi-dimensional point process whose intensity is parametrized by the compatibility of the instantaneous user and item embeddings:
$$\lambda^{u,i}(t \mid t') = \underbrace{\exp\big( u_u(t')^\top i_i(t') \big)}_{\text{user-item compatibility}} \cdot \underbrace{(t - t')}_{\text{time lapse}} \quad (4)$$
where t > t', and t' is the last time point where either user u's embedding or item i's embedding changes before time t.
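Because the intensity in (4) is Rayleigh on each inter-event interval, the next-event density and its mean have closed forms; the sketch below (ours) computes them directly from the embeddings.
```python
import numpy as np

def intensity(u_emb, i_emb, t, t_prev):
    """lambda^{u,i}(t | t') = exp(u . i) * (t - t'), per equation (4)."""
    return np.exp(u_emb @ i_emb) * (t - t_prev)

def next_event_density(u_emb, i_emb, t, t_prev):
    """Rayleigh density f(t | t') = c (t - t') exp(-c (t - t')^2 / 2),
    with c = exp(u . i); this is f(.) from equation (1)."""
    c = np.exp(u_emb @ i_emb)
    dt = t - t_prev
    return c * dt * np.exp(-0.5 * c * dt ** 2)

def expected_return_time(u_emb, i_emb, t_prev):
    """Mean of the Rayleigh inter-event time: t' + sqrt(pi / (2 c))."""
    c = np.exp(u_emb @ i_emb)
    return t_prev + np.sqrt(np.pi / (2.0 * c))
```
For item recommendation, one can evaluate `next_event_density` across all (u, i) dimensions and pick the item whose density peaks, as described in the rationale below.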
The rationale behind this formulation is three-fold:

• Time as a random variable. Instead of discretizing time into epochs as in prior work, we explicitly model the timing of each interaction event as a random variable, which naturally captures the heterogeneity of the temporal interactions between users and items.

• Short term preference. The probability for user $u$ to interact with item $i$ depends on the compatibility of their instantaneous embeddings, which is evaluated through the inner product at the last event time $t'$. Because $\mathbf{u}_u(t)$ and $\mathbf{i}_i(t)$ co-evolve through time, their inner product measures a general representation of the cumulative influence from the past interactions on the occurrence of the current event. The $\exp(\cdot)$ function ensures that the intensity is positive and well defined.

• Rayleigh time distribution. The user and item embeddings are piecewise constant, and we use the time-lapse term to make the intensity piecewise linear. This form leads to a Rayleigh distribution for the time intervals between consecutive events in each dimension. It is well adapted to modeling fads, where the event-happening likelihood $f(\cdot)$ in (1) rises to a peak and then drops extremely rapidly. Furthermore, it is computationally easy to obtain an analytic form of $f(\cdot)$. One can then use $f(\cdot)$ to make item recommendations by finding the dimension in which $f(\cdot)$ reaches its peak.

With the parameterized intensity function, we can further estimate the parameters using maximum likelihood estimation of all events.
The joint negative log-likelihood is (Daley & Vere-Jones, 2007):

$$\ell = -\sum_{j=1}^{N} \log\big(\lambda^{u_j, i_j}(t_j \mid t_j')\big) + \sum_{u=1}^{m} \sum_{i=1}^{n} \int_{0}^{T} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau \qquad (5)$$

The rationale of the objective is two-fold: (i) the negative intensity summation term ensures that the probability of all interaction events is maximized; (ii) the second, survival-probability term penalizes the non-presence of an interaction between all possible user-item pairs on the observation window. Hence, our framework not only explains why an event happened, but also why an event did not happen.

We now propose an efficient algorithm to learn the parameters $\{V_i\}_{i=1}^4$ and $\{W_i\}_{i=1}^4$. The batch objective function is presented in (5). Back-Propagation Through Time (BPTT) is the standard way to train an RNN; to make the back-propagation tractable, one typically needs to truncate during training. However, due to the novel co-evolutionary nature of our model, all the events are related to each other through the user-item bipartite graph (Figure 2b), which makes the objective hard to decompose.

Hence, in sharp contrast to works on sequential data (Hidasi et al., 2016; Du et al., 2016), where one can easily break the sequences into multiple segments to make BPTT tractable, designing BPTT is challenging in our case. To efficiently solve this problem, we first order all the events globally and then do mini-batch training in a sliding-window fashion. Each time we conduct feed-forward and back-propagation, we take the consecutive events within the current sliding window to build the computational graph. Thus, in our case the truncation is on the global timeline, instead of over individual independent sequences as in prior works.

Next, we explain our procedure in detail. Given a mini-batch of $M$ ordered events $\mathcal{O} = \{e_j\}_{j=1}^M$, we set the time span to be $[T_0 = t_1, T = t_M]$. Below we show how to compute the intensity and the survival-probability terms in the objective function (5), respectively.

Figure 2: Intensity computation. (a) Each arrow means the flow of feature embedding computation, e.g., Jacob interacts with basketball at 10:15am. Then the embeddings are updated: his feature at 10:15am is influenced by his feature and the basketball feature at 9:45am (arrows 1 and 2); the basketball's feature is influenced by Jacob's feature and its own feature (arrows 3 and 4). (b) The event dependency for two users and two forums (items). It shows how an event in one dimension influences other dimensions. Each orange arrow represents the dependency within a dimension, and the black arrow denotes the cross-dimension dependency, e.g., Sophie interacts with volleyball at 2:30pm, and this event changes the volleyball embedding and thus will affect Jacob's visit at 3:30pm.

Figure 3: Survival probability computation. (a) A user's or item's feature embedding is piecewise constant and changes only after an interaction event happens; only one dimension of the feature embedding is shown. (b) Survival probability for a user-item pair $(u, i)$: the integral $\int_{T_0}^{T} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau$ is decomposed into inter-event intervals separated by $\{t_0, \ldots, t_3\}$, with a closed form on each interval.

Computing the intensity function. Each time a new event $e_j$ happens between $u_j$ and $i_j$, their corresponding feature embeddings evolve according to a computational graph, as illustrated in Figure 2a. Due to the change of feature embedding, all the dimensions related to $u_j$ or $i_j$ will be influenced, and the intensity functions for those dimensions change consequently. This cross-dimension dependency is shown in Figure 2b. In our implementation, we first compute the corresponding intensity $\lambda^{u_j, i_j}(t_j \mid t_j')$ according to (4) and then update the embeddings of $u_j$ and $i_j$. This operation takes $O(1)$ complexity and is independent of the number of users or items.

Computing the survival function. To compute the survival probability $-\int_{T_0}^{T} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau$ for each pair $(u, i)$, we first collect all the time stamps $\{t_k\}$ that have events related to either $u$ or $i$. For notation simplicity, let $|\{t_k\}| = n_{u,i}$, $t_1 = T_0$ and $t_{n_{u,i}} = T$. Since the embeddings are piecewise constant, the corresponding intensity function is piecewise linear, according to (4). Thus, the integral decomposes over the time intervals on which the embeddings are constant, with a closed form on each:

$$\int_{T_0}^{T} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau = \sum_{k=1}^{n_{u,i}-1} \frac{(t_{k+1} - t_k)^2}{2} \exp\big(\mathbf{u}_u(t_k)^\top \mathbf{i}_i(t_k)\big)$$

Figure 3 visualizes the computation.
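The closed-form decomposition above is straightforward to implement. Here is a small NumPy sketch, with our own function names and a toy representation of the piecewise-constant embeddings:

```python
import numpy as np

def survival_integral(times, u_embs, i_embs):
    """Integral of lambda^{u,i} over [times[0], times[-1]].

    times  : sorted event times t_1 < ... < t_n relevant to the pair (u, i)
    u_embs : user embedding on each inter-event interval, shape (n-1, k)
    i_embs : item embedding on each inter-event interval, shape (n-1, k)
    Each interval [t_k, t_{k+1}] contributes
    0.5 * (t_{k+1} - t_k)^2 * exp(u(t_k) . i(t_k)).
    """
    gaps = np.diff(times)
    compat = np.exp(np.sum(u_embs * i_embs, axis=1))
    return np.sum(0.5 * gaps**2 * compat)

rng = np.random.default_rng(2)
times = np.array([0.0, 0.7, 1.1, 2.0])
print(survival_integral(times, rng.normal(size=(3, 8)), rng.normal(size=(3, 8))))
```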
Although the survival probability term exists in closed form, we still need to solve two challenges. First, it is still expensive to compute it for each user-item pair. Moreover, since the user-item interaction bipartite graph is very sparse, it is not necessary to monitor every dimension in the stochastic training setting. To speed up the computation, we propose a novel random-sampling scheme as follows.

Note that the intensity term in the objective function (5) tries to maximize the inner product between each user and item that have an interaction event, while the survival term penalizes the inner products of all other pairs. We observe that this is similar to the softmax computation in classification problems. Hence, inspired by the noise-contrastive estimation method (Gutmann & Hyvärinen, 2012) that is widely used in language models (Mnih & Kavukcuoglu, 2013), we keep the dimensions that have events on them, while randomly sampling dimensions without events in the current mini-batch (a small sketch of this sampling scheme is given below).

The second challenge lies in the fact that the user-item interactions vary a lot across mini-batches, and hence the corresponding computational graph also changes greatly. To make the learning efficient, we use the graph embedding framework (Dai et al., 2016), which allows training deep learning models in which each term of the objective has a different computational graph but with shared parameters. The Adam optimizer (Kingma & Ba, 2014) together with gradient clipping is used in our experiments.
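A sketch of the random-sampling idea referenced above, in our own simplified form (not the released implementation): keep the user-item pairs that have events in the current mini-batch and subsample the remaining pairs for the survival term.

```python
import numpy as np

def sample_survival_pairs(event_pairs, num_users, num_items, n_neg, rng):
    """Keep (user, item) pairs with events in the mini-batch and randomly
    sample n_neg extra pairs without events for the survival term."""
    observed = set(event_pairs)
    sampled = set()
    while len(sampled) < n_neg:
        pair = (int(rng.integers(num_users)), int(rng.integers(num_items)))
        if pair not in observed:
            sampled.add(pair)
    return list(observed) + list(sampled)

rng = np.random.default_rng(3)
pairs = sample_survival_pairs([(0, 5), (2, 1)], num_users=100, num_items=500,
                              n_neg=10, rng=rng)
print(len(pairs))  # 12
```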
"}, {"section_index": "4", "section_name": "6 EXPERIMENTS", "section_text": "We evaluate our model on real-world datasets. For each sequence of user activities, we use all the events up to time $T \cdot p$ as the training data, and the remaining events as the testing data, where $T$ is the length of the observation window. We tune the latent rank of the baselines using 5-fold cross validation with grid search. We vary the proportion $p \in \{0.7, 0.72, 0.74, 0.76, 0.78\}$ and report results averaged over five runs on two tasks (we will release code and data once published).

We compare with the following baselines:

• LowRankHawkes (Du et al., 2015): a low-rank Hawkes process model which assumes user-item interactions to be independent of each other and does not capture the co-evolution of user and item features.

• Coevolving (Wang et al., 2016b): a multi-dimensional point process model which uses a simple linear embedding to model the co-evolution of user and item features.

• PoissonTensor (Chi & Kolda, 2012): Poisson tensor factorization has been shown to perform better than factorization methods based on squared loss (Karatzoglou et al., 2010; Xiong et al., 2010) on recommendation tasks. The performance of this baseline is reported using the average of the parameters fitted over all time intervals.

• TimeSVD++ (Koren, 2009) and FIP (Yang et al., 2011): these two methods are designed only for explicit ratings; the implicit user feedback (in the form of a series of interaction events) is converted into explicit ratings via the frequency of a user's interactions.

• STIC (Kapoor et al., 2015): fits a semi-hidden Markov model (HMM) to each observed user-item pair and is designed only for time prediction.

Table 1: Comparison with different methods.

Method            DeepCoevolve   LowRankHawkes   Coevolving      PoissonTensor   TimeSVD++       FIP             STIC
Continuous time   Yes            Yes             Yes             No              No              No              Yes
Predict item      Yes            Yes             Yes             Yes             Yes             Yes             No
Predict time      Yes            Yes             Yes             No              No              No              Yes
Computation       RNN            Factorization   Factorization   Factorization   Factorization   Factorization   HMM

We use three real-world datasets:

• IPTV. It contains 7,100 users' watching history of 385 TV programs over 11 months (Jan 1 - Nov 30, 2012), with around 2M events and 1,420 movie features (including 1,073 actors, 312 directors, 22 genres, 8 countries and 5 years).

• Yelp. This data was made available in round 7 of the Yelp Dataset Challenge. It contains reviews for various businesses from October 2004 to December 2015. The subset we use contains 1,005 users and 47,924 businesses, with 291,716 reviews in total.

• Reddit. We collected discussion-related data on different subreddits (groups) for the month of January 2014. We filtered out all bot users and their posts, and randomly selected 1,000 users, 1,403 groups, and 10,000 discussion events.

We evaluate all methods on the following two tasks:

• Item prediction. At each test time $t$, we predict the item that the user $u$ will interact with. We rank all the items in descending order of the conditional density $f^{u,i}(t) = \lambda^{u,i}(t)\, S^{u,i}(t)$ and report the Mean Average Rank (MAR) of each test item at the test time. Ideally, the item associated with the test time $t$ should rank first, hence a smaller value indicates better predictive performance.

• Time prediction. We predict the expected time at which a testing event will occur between a given user-item pair. Using the Rayleigh distribution, it is given by $\mathbb{E}_{t \sim f^{u,i}(\cdot)}[t] = \sqrt{\pi / \big(2 \exp(\mathbf{u}_u(t')^\top \mathbf{i}_i(t'))\big)}$. We report the Mean Absolute Error (MAE) between the predicted and the true time.
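Both prediction rules follow directly from the Rayleigh form of $f^{u,i}$. Below is a small NumPy sketch of the two rules; the names and shapes are our own illustrative choices:

```python
import numpy as np

def predict_item(u_emb, item_embs, t, t_prev):
    """Rank items by the conditional density f(t) = a*tau*exp(-a*tau^2/2)."""
    a = np.exp(item_embs @ u_emb)          # compatibility per item
    tau = t - t_prev
    f = a * tau * np.exp(-0.5 * a * tau**2)
    return np.argsort(-f)                  # best item first

def predict_time(u_emb, i_emb):
    """Expected inter-event time under the Rayleigh distribution."""
    return np.sqrt(np.pi / (2.0 * np.exp(u_emb @ i_emb)))

rng = np.random.default_rng(4)
u = rng.normal(size=8)
items = rng.normal(size=(20, 8))
print(predict_item(u, items, t=1.2, t_prev=1.0)[:3], predict_time(u, items[0]))
```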
Figure 4: Prediction results on the three real-world datasets: (a) IPTV, (b) Reddit, (c) Yelp. The first row reports item prediction (MAR) and the second row time prediction (MAE).

Figure 4 shows that DEEPCOEVOLVE significantly outperforms both epoch-based baselines and state-of-the-art point-process-based methods. LOWRANKHAWKES performs well on item prediction but not on time prediction, while COEVOLVING performs well on time prediction but not on item prediction. We discuss the performance with respect to the two metrics below.

Item prediction. Note that the best possible MAR is 1, and our method gets quite accurate results, with a value of 1.7 on IPTV and 1.9 on Reddit. LOWRANKHAWKES achieves comparable item-prediction performance, but is not as good on the time-prediction task. We think the reason is as follows: since one only needs the rank of the conditional density $f(\cdot)$ in (1) to conduct item prediction, LOWRANKHAWKES may still be good at differentiating between conditional density functions, yet it may not learn their actual values accurately, as shown in the time-prediction task, where the value of the conditional density function is needed for precise prediction.

Time prediction. The second row of Figure 4 shows that DEEPCOEVOLVE outperforms the other methods. Compared with LOWRANKHAWKES, which achieves comparable item-prediction performance, it has a 6x improvement on Reddit, a 10x improvement on Yelp, and a 30x improvement on IPTV. The time unit is hours, so this amounts to around a 2-week accuracy improvement on IPTV and 2 days on Reddit, which is important for online merchants making time-sensitive recommendations. An intuitive explanation is that our method accurately captures the nonlinear pattern between user and item interactions. The competitor LOWRANKHAWKES assumes specific parametric forms of the user-item interaction process, and hence may not be accurate or expressive enough to capture real-world temporal patterns. Furthermore, it models each user-item interaction dimension independently, which may lose the important influence of a user's interactions with other items when predicting the current item's reoccurrence time. Our work also outperforms COEVOLVING, e.g., with around a 3x MAE improvement on IPTV; moreover, the item-prediction performance is also much better than COEVOLVING's. This shows the importance of using an RNN to capture the nonlinear embedding of user and item latent features, instead of the simple parametrized linear embedding in COEVOLVING."}, {"section_index": "5", "section_name": "6.4 INSIGHT OF RESULTS", "section_text": "We now look deeper and provide the rationale behind the prediction results in the following two subsections. First, to understand the difficulty of the prediction tasks in each dataset, we study their different sparsity properties. For multidimensional point-process models, the fewer events we observe in each dimension, the sparser the dataset. Our approach alleviates the sparsity problem by modeling the dependencies among dimensions, and it is thus consistently better than the baseline algorithms. Next, we fix one dataset and evaluate how different levels of sparsity in the training data influence each algorithm's performance.

Figure 5: Visualization of the sparsity property of each dataset: (a) IPTV, 385 items, (b) Reddit, 1,403 groups, (c) Yelp, 47,924 businesses. The first row shows the distribution of the number of events per user. The second row shows the user-item interaction graph, generated as follows: for each dataset, we randomly pick 10 users with 100 history events each and collect all items they have interacted with. The interaction graph itself is a bipartite graph, with users on the left side and items on the right side.

Sparsity in terms of the number of events per user. Typically, the more user history data we have, the better the results we obtain in the prediction tasks. In the IPTV dataset, users typically have longer histories than the users in the Reddit and Yelp datasets; thus our algorithm and all baseline methods achieve their best performance on this dataset. For the Reddit and Yelp datasets, however, it is hard to judge the difficulty based only on the distribution of history lengths, so we perform a more detailed visualization.

Sparsity in terms of the diversity of items to recommend. From the bipartite graph, it is easy to see that the Yelp dataset has a higher density than the other two datasets. The density of the interaction graph reflects the variety of history per user. For example, the users in IPTV have only 385 programs to watch, but they can choose among 47,924 businesses in the Yelp dataset.
Also, the Yelp dataset has 9 times more items than the IPTV and Reddit datasets in the bipartite graph. This means that the users in the Yelp dataset have more diverse tastes than users in the other two datasets: if users had similar tastes, the number of distinct items in the union of their histories would be small.

Based on the above two facts, we can see that the Yelp dataset is the most sparse: it has a shorter history per user and much more diversity of items, so it is not surprising that this dataset is much harder than IPTV and Reddit."}, {"section_index": "6", "section_name": "6.4.2. ROBUSTNESS OF THE ALGORITHM", "section_text": "With a case study on the most challenging Yelp dataset, we further evaluate how each algorithm performs with a lower level of sparsity than the one used in Figure 4(c). We use this to demonstrate that our work is the most robust and performs well across different levels of sparsity.

We first create Yelp100, a denser dataset, by filtering the original Yelp dataset to keep the top 100 users; each user has at least 200 events. Figure 6(a) shows the statistics of this dataset: on average, the users have more history events than in the original Yelp dataset in Figure 5(c).

On this dense dataset, Figures 6(b) and (c) show that all the algorithms' performance improves with more history events, compared to the performance on the original Yelp dataset. For example, LOWRANKHAWKES has rank-prediction results similar to our DEEPCOEVOLVE on this dense dataset. However, as the dataset becomes sparse, the performance of LOWRANKHAWKES drops significantly, as shown in Figure 4(c): the rank-prediction error goes from 90 to 2,128, and the time error goes from 724 to 11,043.5. We think this is because the model relies more on the history information of each user-item pair.

On the contrary, our DEEPCOEVOLVE still has superior performance at such a high level of sparsity: the rank error only changes from 87 to 107, and the time error changes from 72 to 884 as the data becomes sparse. This shows that our work is the most robust to sparsity in the data. We think this is because our work accurately captures the nonlinear multidimensional dependencies between users' and items' latent features."}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "We have proposed an efficient framework to model the nonlinear co-evolution of users' and items' latent features, in which the user and item's evolving and co-evolving processes are captured by an RNN. The framework is based on temporal point processes and models time as a random variable, in sharp contrast to prior epoch-based works. We demonstrate the superior performance of our method on both the time and item prediction tasks, which is not possible for most prior work. Future work includes extending the framework to other social applications, such as group dynamics in messaging services.

Figure 6: Comparison of performance with different amounts of history on Yelp100: (a) distribution of the number of events per user, (b) MAR, (c) MAE.

D.R. Cox and V. Isham. Point processes, volume 12. Chapman & Hall/CRC, 1980.

Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data.
In ICML, 2016.

Nan Du, Yichen Wang, Niao He, and Le Song. Time sensitive recommendation from recurrent user activities. In NIPS, 2015.

Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical Poisson factorization. In UAI, 2015.

Alan G Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971.

Balazs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. Session-based recommendations with recurrent neural networks. In ICLR, 2016.

Komal Kapoor, Karthik Subbian, Jaideep Srivastava, and Paul Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In WSDM, 2015.

Alexandros Karatzoglou, Xavier Amatriain, Linas Baltrunas, and Nuria Oliver. Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. In RecSys, 2010.

Odd Aalen, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer, 2008.

D. Agarwal and B.-C. Chen. Regression-based latent factor models. In KDD, 2009.

Tianqi Chen, Hang Li, Qiang Yang, and Yong Yu. General functional matrix factorization using gradient boosting. In ICML, pp. 436-444, 2013.

Y. Chen, D. Pavlov, and J.F. Canny. Large-scale behavioral targeting. In KDD, 2009.

Eric C Chi and Tamara G Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications, 33(4):1272-1299, 2012.

D.J. Daley and D. Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure. Springer, 2007.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.

Michael D Ekstrand, John T Riedl, and Joseph A Konstan. Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction, 4(2):81-173, 2011.

Y. Koren. Collaborative filtering with temporal dynamics. In KDD, 2009.

Hao Wang, Naiyan Wang, and Dit-Yan Yeung. Collaborative deep learning for recommender systems. In KDD, 2015a.

Yichen Wang, Bo Xie, Nan Du, and Le Song. Isotonic Hawkes processes. In ICML, 2016c.

Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff G. Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, 2010.

Shuang-Hong Yang, Bo Long, Alex Smola, Narayanan Sadagopan, Zhaohui Zheng, and Hongyuan Zha. Like like alike: joint friendship and interest propagation in social networks. In WWW, 2011.

Xing Yi, Liangjie Hong, Erheng Zhong, Nanthan Nan Liu, and Suju Rajan. Beyond clicks: Dwell time for personalization. In RecSys, 2014.

Young-Jun Ko, Lucas Maystre, and Matthias Grossglauser. Collaborative recurrent neural networks for dynamic recommender systems. Journal of Machine Learning Research, pp. 1-16, 2016.

Yehuda Koren and Joe Sill. OrdRec: an ordinal model for predicting personalized item rating distributions. In RecSys, 2011.

Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In NIPS, 2013.
Preeti Bhargava, Thomas Phan, Jiayu Zhou, and Juhan Lee. Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In WWW, 2015.

R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, 2008.

Yang Song, Ali Mamdouh Elkahky, and Xiaodong He. Multi-rate deep learning for temporal recommendation. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 909-912, 2016.

Yong K Tan, Xinxing Xu, and Yong Liu. Improved recurrent neural networks for session-based recommendations. arXiv:1606.08117v2, 2016.

Yichen Wang and Aditya Pal. Detecting emotions in social media: A constrained optimization approach. In IJCAI, 2015.

Yichen Wang, Nan Du, Rakshit Trivedi, and Le Song. Coevolutionary latent feature processes for continuous-time user-item interactions. In NIPS, 2016b."}, {"section_index": "8", "section_name": "A DETAILS ON GRADIENT COMPUTATION", "section_text": "Computing gradients. For illustration purposes, we use Sigmoid as the nonlinear activation function $\sigma$, so that $\sigma'(z) = \sigma(z)(1 - \sigma(z))$. To obtain the gradients with respect to the parameters $W$, we first compute the gradients with respect to each jump point of the embeddings. For user $u$'s embedding after his $k$-th event, the partial derivative collects the terms of the objective on $[t_k^u, t_{k+1}^u]$ where this embedding is in effect, plus the terms back-propagated through the subsequent updates of the user's own embedding and of the interacting item's embedding:

$$\frac{\partial \ell}{\partial \mathbf{u}_u(t_k^u)} = \underbrace{\frac{\partial}{\partial \mathbf{u}_u(t_k^u)}\Big[-\log \lambda^{u,i_k^u}(t_k^u \mid \cdot) + \sum_i \int_{t_k^u}^{t_{k+1}^u} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau\Big]}_{\text{from intensity}} + W_2^\top \Big(\frac{\partial \ell}{\partial \mathbf{u}_u(t_{k+1}^u)} \odot \mathbf{u}_u(t_{k+1}^u) \odot \big(1 - \mathbf{u}_u(t_{k+1}^u)\big)\Big) + V_3^\top \Big(\frac{\partial \ell}{\partial \mathbf{i}_{i_k^u}(t_{k+1})} \odot \mathbf{i}_{i_k^u}(t_{k+1}) \odot \big(1 - \mathbf{i}_{i_k^u}(t_{k+1})\big)\Big),$$

where $\odot$ denotes element-wise multiplication and $t_{k+1}$ is the next update time of the item's embedding.

The gradient coming from the second (survival) term is also easy to compute, since the Rayleigh distribution has a closed-form survival function. For a given item $i$, if its feature does not change in the time interval $[t_k, t_{k+1}]$, then we have

$$\frac{\partial}{\partial \mathbf{u}_u(t_k)} \int_{t_k}^{t_{k+1}} \lambda^{u,i}(\tau \mid \tau')\,\mathrm{d}\tau = \frac{(t_{k+1} - t_k)^2}{2} \exp\big(\mathbf{u}_u(t_k)^\top \mathbf{i}_i(t_k)\big)\, \mathbf{i}_i(t_k). \qquad (7)$$

On the other hand, if the embedding of item $i$ changes during this interval, we break the interval into segments and sum the gradients of the segments in a way similar to (7). We can then compute the gradients with respect to $W_i, i \in \{1, 2, 3, 4\}$ as follows. Writing $\mathbf{g}_u(t_k) := \frac{\partial \ell}{\partial \mathbf{u}_u(t_k)} \odot \mathbf{u}_u(t_k) \odot (1 - \mathbf{u}_u(t_k))$ for the Sigmoid back-propagated error, we have:

$$\frac{\partial \ell}{\partial W_1} = \sum_{u,k} \mathbf{g}_u(t_k)\, (t_k - t_{k-1}), \quad \frac{\partial \ell}{\partial W_2} = \sum_{u,k} \mathbf{g}_u(t_k)\, \mathbf{u}_u(t_{k-1})^\top, \quad \frac{\partial \ell}{\partial W_3} = \sum_{u,k} \mathbf{g}_u(t_k)\, \mathbf{i}_{i_k^u}(t_k-)^\top, \quad \frac{\partial \ell}{\partial W_4} = \sum_{u,k} \mathbf{g}_u(t_k)\, (\mathbf{q}_k^u)^\top.$$

Since the items are treated symmetrically with the users, the corresponding derivatives for $V_i$ can be obtained in a similar way."}]
BJrFC6ceg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The PixelCNN, introduced by|van den Oord et al.|(2016b), is a generative model of images with <\n\ntractable likelihood. The model fully factorizes the probability density function on an image x ove!\nall its sub-pixels (color channels in a pixel) as p(x) = [], p(a;|2<;). The conditional distributions\np(\u00ab;|v<;) are parameterized by convolutional neural networks and all share parameters. The Pixel.\nCNN is a powerful model as the functional form of these conditionals is very flexible. In additior\nit is computationally efficient as all conditionals can be evaluated in parallel on a GPU for an ob-\nserved image x. Thanks to these properties, the PixelCNN represents the current state-of-the-art ir\ngenerative modeling when evaluated in terms of log-likelihood. Besides being used for modeling\n\nimages, the PixelCNN model was recently extended to model audio (van den Oord et al. 201 6a)\nvideo (Kalchbrenner et al.|{2016b) and text (Kalchbrenner et al.|/2016a)."}, {"section_index": "1", "section_name": "2 MODIFICATIONS TO PIXELCNN", "section_text": "We now describe the most important modifications we have made to the PixelCNN model archite-\ncure as described by (2016c). For complete details see our code release at\n\nhtettnesr/laithih cam/nnena\nThe standard PixelCNN model specifies the conditional distribution of a sub-pixel, or color channel\nof a pixel, as a full 256-way softmax. This gives the model a lot of flexibility, but it is also very costly\nin terms of memory. Moreover, it can make the gradients with respect to the network parameters"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "For use in our research, we developed our own internal implementation of PixelCNN and made a\nnumber of modifications to the base model to simplify its structure and improve its performance.\nWe now release our implementation at/ht tps : //github.com/openai/pixel-\u2014cnn\\ hoping\nthat it will be useful to the broader community. Our modifications are discussed in Section [2] and\nevaluated experimentally in Section B] State-of-the-art log-likelihood results confirm their useful-\nness.\nvery sparse, especially early in training. With the standard parameterization, the model does not\nknow that a value of 128 is close to a value of 127 or 129, and this relationship first has to be learned\nbefore the model can move on to higher level structures. In the extreme case where a particulai\nsub-pixel value is never observed, the model will learn to assign it zero probability. This would be\nespecially problematic for data with higher accuracy on the observed pixels than the usual 8 bits: In\nthe extreme case where very high precision values are observed, the PixelCNN, in its current form.\nwould require a prohibitive amount of memory and computation, while learning very slowly. We\ntherefore propose a different mechanism for computing the conditional probability of the observed\ndiscretized pixel values. In our model, like in the VAE of|[Kingma et al.|(2016), we assume there is\na latent color intensity 7 with a continuous distribution, which is then rounded to its nearest 8-bit\nrepresentation to give the observed sub-pixel value x. By choosing a simple continuous distributior\nfor modeling v (like the logistic distribution as done by/Kingma et al.|(2016)) we obtain a smooth and\nmemory efficient predictive distribution for x. 
Here, we take this continuous univariate distribution to be a mixture of logistic distributions, which allows us to easily calculate the probability of the observed discretized value $x$, as shown in equation (2). For all sub-pixel values $x$ except the edge cases 0 and 255 we have:

$$\nu \sim \sum_{i=1}^{K} \pi_i\, \mathrm{logistic}(\mu_i, s_i) \qquad (1)$$

$$P(x \mid \pi, \mu, s) = \sum_{i=1}^{K} \pi_i \left[\sigma\big((x + 0.5 - \mu_i)/s_i\big) - \sigma\big((x - 0.5 - \mu_i)/s_i\big)\right] \qquad (2)$$

where $\sigma(\cdot)$ is the logistic sigmoid function. For the edge case of 0, replace $x - 0.5$ by $-\infty$, and for 255 replace $x + 0.5$ by $+\infty$. Our provided code contains a numerically stable implementation for calculating the log of the probability in equation (2).

Our approach follows earlier work using continuous mixture models (Domke et al., 2008; Theis et al., 2012; Uria et al., 2013; Theis & Bethge, 2015), but avoids allocating probability mass to values outside the valid range of [0, 255] by explicitly modeling the rounding of $\nu$ to $x$. In addition, we naturally assign higher probability to the edge values 0 and 255 than to their neighboring values, which corresponds well with the observed data distribution as shown in Figure 1. Experimentally, we find that only a relatively small number of mixture components, say 5, is needed to accurately model the conditional distributions of the pixels. The output of our network is thus of much lower dimension, yielding much denser gradients of the loss with respect to our parameters. In our experiments this greatly sped up convergence during optimization, especially early on in training. However, due to the other changes in our architecture compared to that of van den Oord et al. (2016c), we cannot say with certainty that this would also apply to the original PixelCNN model.

Figure 1: Marginal distribution of all sub-pixel values in CIFAR-10. The edge value of 255 is much more frequent than its neighbouring values: this is easy to model using our rounding-based approach, but harder using continuous or truncated distributions.
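The likelihood in equation (2) is simple to implement. Below is a small NumPy sketch of our own (unvectorized for clarity; the released TensorFlow code additionally handles numerical stability in log-space):

```python
import numpy as np
from scipy.special import expit  # numerically robust logistic sigmoid

def discretized_logistic_mix_prob(x, pi, mu, s):
    """P(x | pi, mu, s) for an integer sub-pixel value x in {0, ..., 255};
    pi, mu, s are (K,) arrays of mixture weights, means, and scales."""
    upper = np.inf if x == 255 else (x + 0.5 - mu) / s   # edge case 255
    lower = -np.inf if x == 0 else (x - 0.5 - mu) / s    # edge case 0
    return float(np.sum(pi * (expit(upper) - expit(lower))))

pi = np.array([0.2, 0.5, 0.3])
mu = np.array([30.0, 128.0, 240.0])
s = np.array([8.0, 20.0, 10.0])
# Probabilities over all 256 values sum to one:
print(sum(discretized_logistic_mix_prob(v, pi, mu, s) for v in range(256)))
```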
The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for a very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate all feature maps into 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. This added complexity seems to be unnecessary, as the dependencies between the color channels of a pixel are likely to be relatively simple and should not require a deep network to model. Therefore, we instead condition only on whole pixels up and to the left in an image, and output joint predictive distributions over all 3 channels of the predicted pixel. The predictive distribution on a pixel itself can be interpreted as a simple factorized model: we first predict the red channel using a discretized mixture of logistics as described above. Next, we predict the green channel using a predictive distribution of the same form; here we allow the means of the mixture components to depend linearly on the value of the red sub-pixel. Finally, we model the blue channel in the same way, where we again only allow linear dependency on the red and green channels. For the pixel $(r_{i,j}, g_{i,j}, b_{i,j})$ at location $(i, j)$ in our image, the distribution conditional on the context $C_{i,j}$, consisting of the mixture indicator and the previous pixels, is thus

$$p(r_{i,j}, g_{i,j}, b_{i,j} \mid C_{i,j}) = P\big(r_{i,j} \mid \mu_r(C_{i,j}), s_r(C_{i,j})\big) \times P\big(g_{i,j} \mid \mu_g(C_{i,j}, r_{i,j}), s_g(C_{i,j})\big) \times P\big(b_{i,j} \mid \mu_b(C_{i,j}, r_{i,j}, g_{i,j}), s_b(C_{i,j})\big) \qquad (3)$$

$$\mu_g(C_{i,j}, r_{i,j}) = \mu_g(C_{i,j}) + \alpha(C_{i,j})\, r_{i,j} \qquad (4)$$

$$\mu_b(C_{i,j}, r_{i,j}, g_{i,j}) = \mu_b(C_{i,j}) + \beta(C_{i,j})\, r_{i,j} + \gamma(C_{i,j})\, g_{i,j} \qquad (5)$$

with $\alpha, \beta, \gamma$ scalar coefficients depending on the mixture component and previous pixels.

The mixture indicator is shared across all 3 channels; i.e., our generative model first samples a mixture indicator for a pixel, and then samples the color channels one-by-one from the corresponding mixture component. Had we used a discretized mixture of univariate Gaussians for the sub-pixels instead of logistics, this would have been exactly equivalent to predicting the complete pixel using a (discretized) mixture of 3-dimensional Gaussians with full covariance. The logistic and Gaussian distributions are very similar, so this is indeed very close to what we end up doing. For full implementation details we refer to our code at https://github.com/openai/pixel-cnn

The original PixelCNN only uses convolutions with a small receptive field. Such convolutions are good at capturing local dependencies, but not necessarily at modeling long-range structure. Although we find that capturing these short-range dependencies is often enough for obtaining very good log-likelihood scores (see Table 2), explicitly encouraging the model to capture long-range dependencies can improve the perceptual quality of generated images (compare Figure 3 and Figure 5). One way of allowing the network to model structure at multiple resolutions is to introduce dilated convolutions into the model, as proposed by van den Oord et al. (2016a) and Kalchbrenner et al. Here, we instead propose to use downsampling by using convolutions of stride 2. Downsampling accomplishes the same multi-resolution processing afforded by dilated convolutions, but at a reduced computational cost: where dilated convolutions operate on input of ever increasing size (due to zero padding), downsampling reduces the input size by a factor of 4 (for a stride of 2 in 2 dimensions) at every downsampling. The downside of using downsampling is that it loses information, but we can compensate for this by introducing additional short-cut connections into the network, as explained in the next section. With these additional short-cut connections, we found the performance of downsampling to be the same as for dilated convolution."}, {"section_index": "3", "section_name": "2.4 ADDING SHORT-CUT CONNECTIONS", "section_text": "For input of size 32 x 32 our suggested model consists of 6 blocks of 5 ResNet layers. In between the first and second block, as well as the second and third block, we perform subsampling by strided convolution. In between the fourth and fifth block, as well as the fifth and sixth block, we perform upsampling by transposed strided convolution. This subsampling and upsampling process loses information, and we therefore introduce additional short-cut connections into the model to recover this information from lower layers in the model. The short-cut connections run from the ResNet layers in the first block to the corresponding layers in the sixth block, and similarly between blocks two and five, and blocks three and four.
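To illustrate this connectivity, here is a simplified PyTorch sketch with our own layer sizes; it is not the released implementation and omits the autoregressive masking and gated ResNet blocks, showing only the strided down/upsampling with mirror-image skip connections:

```python
import torch
import torch.nn as nn

class TinyUNetLike(nn.Module):
    """Strided-conv downsampling, transposed-conv upsampling, and long-range
    skip connections from each resolution to its mirror-image layer."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(3, ch, 3, padding=1)               # 32x32
        self.down1 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)   # -> 16x16
        self.down2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)   # -> 8x8
        self.up1 = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)  # -> 16x16
        self.up2 = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)  # -> 32x32
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        h32 = torch.relu(self.enc1(x))
        h16 = torch.relu(self.down1(h32))
        h8 = torch.relu(self.down2(h16))
        u16 = torch.relu(self.up1(h8)) + h16   # short-cut recovers 16x16 info
        u32 = torch.relu(self.up2(u16)) + h32  # short-cut recovers 32x32 info
        return self.out(u32)

net = TinyUNetLike()
print(net(torch.zeros(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 32, 32])
```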
This structure resembles the VAE model with top-down inference used by Kingma et al. (2016), as well as the U-net used by Ronneberger et al. (2015) for image segmentation. Figure 2 shows our model structure graphically.

Figure 2: Like van den Oord et al. (2016c), our model follows a two-stream (downward, and downward+rightward) convolutional architecture with residual connections; however, there are two significant differences in connectivity. First, our architecture incorporates downsampling and upsampling, such that the inner parts of the network operate over a larger spatial scale, increasing computational efficiency. Second, we employ long-range skip-connections, such that each $k$-th layer provides a direct input to the $(K - k)$-th layer, where $K$ is the total number of layers in the network. The network is grouped into sequences of six layers, where most sequences are separated by downsampling or upsampling."}, {"section_index": "4", "section_name": "2.5 REGULARIZATION USING DROPOUT", "section_text": "The PixelCNN model is powerful enough to overfit on training data. Moreover, rather than just reproducing the training images, we find that overfitted models generate images of low perceptual quality, as shown in Figure 8. One effective way of regularizing neural networks is dropout (Srivastava et al., 2014). For our model, we apply standard binary dropout on the residual path after the first convolution. This is similar to how dropout is applied in the wide residual networks of Zagoruyko & Komodakis (2016). Using dropout allows us to successfully train high capacity models while avoiding overfitting and producing high quality generations (compare Figure 8 and Figure 3).

We apply our model to modeling natural images in the CIFAR-10 data set. We achieve state-of-the-art results in terms of log-likelihood, and generate images with coherent global structure."}, {"section_index": "5", "section_name": "3.1 UNCONDITIONAL GENERATION ON CIFAR-10", "section_text": "We apply our PixelCNN model, with the modifications described above, to generative modeling of the images in the CIFAR-10 data set. For the encoding part of the PixelCNN, the model uses 3 ResNet blocks consisting of 5 residual layers, with 2 x 2 downsampling in between. The same architecture is used for the decoding part of the model, but with upsampling instead of downsampling in between blocks. All residual layers use 192 feature maps and a dropout rate of 0.5. Table 1 shows the state-of-the-art test log-likelihood obtained by our model. Figure 3 shows some samples generated by the model.

Figure 3: Samples from our PixelCNN model trained on CIFAR-10.

Table 1: Negative log-likelihood for generative models on CIFAR-10, expressed as bits per sub-pixel.

Next, we follow van den Oord et al. (2016c) in making our generative model conditional on the class label of the CIFAR-10 images. This is done by linearly projecting a one-hot encoding of the class label into a separate class-dependent bias vector for each convolutional unit in our network.
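Class conditioning of this form is a one-line addition per layer. A minimal PyTorch-style sketch of our own (not the released code):

```python
import torch
import torch.nn as nn

class ClassConditionalBias(nn.Module):
    """Project a one-hot class label to a per-channel bias that is added to
    the feature maps of a convolutional layer."""
    def __init__(self, num_classes, num_channels):
        super().__init__()
        self.proj = nn.Linear(num_classes, num_channels, bias=False)

    def forward(self, h, y_onehot):
        # h: (batch, channels, H, W); y_onehot: (batch, num_classes)
        bias = self.proj(y_onehot)           # (batch, channels)
        return h + bias[:, :, None, None]    # broadcast over spatial dims

layer = ClassConditionalBias(num_classes=10, num_channels=192)
h = torch.zeros(2, 192, 8, 8)
y = torch.eye(10)[:2]
print(layer(h, y).shape)  # torch.Size([2, 192, 8, 8])
```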
We find that making the model class-conditional makes it harder to avoid overfitting on the training data: our best test log-likelihood is 2.94 bits per sub-pixel in this case. Figure 4 shows samples from the class-conditional model, with columns 1-10 corresponding to the 10 classes of CIFAR-10. The images clearly look qualitatively different across the columns, and for a number of them we can clearly identify their class label.

Figure 4: Class-conditional samples from our PixelCNN for CIFAR-10 (left) and real CIFAR-10 images for comparison (right).

It is hypothesized that the size of the receptive field, and additionally the removal of blind spots in the receptive field, are important for PixelCNN's performance (van den Oord et al., 2016b). Indeed, van den Oord et al. (2016c) specifically introduced an improvement over the previous PixelCNN model to remove the blind spot in the receptive field that was present in their earlier model.

Here we present the surprising finding that, in fact, a PixelCNN with a rather small receptive field can attain competitive generative modeling performance on CIFAR-10, as long as it has enough capacity. Specifically, we experimented with our proposed PixelCNN++ model without downsampling blocks and reduced the number of layers to limit the receptive field size. We investigate two receptive field sizes: 11x5 and 15x8. A receptive field size of 11x5, for example, means that the conditional distribution of a pixel can depend on a rectangle of size 11x5 above the pixel, as well as a 5x1 block to the left of the pixel.

As we limit the size of the receptive field, the capacity of the network also drops significantly, since it contains many fewer layers than a normal PixelCNN. We call the type of PixelCNN that is simply limited in depth the "Plain" Small PixelCNN. Interestingly, this model already has better performance than the original PixelCNN of van den Oord et al. (2016b), which had a blind spot. To increase capacity, we introduced two simple variants that make the Small PixelCNN more expressive without growing the receptive field:

• NIN (Network in Network): insert additional gated ResNet blocks with 1x1 convolution between the regular convolution blocks that grow the receptive field.
In this experiment, we inserted 3 NIN blocks between every other layer.

• Autoregressive Channel: skip connections between sets of channels via 1x1 convolution gated ResNet blocks.

Both modifications increase the capacity of the network, resulting in improved log-likelihood, as shown in Table 2. Although the model with small receptive field already achieves an impressive likelihood score, its samples do lack global structure, as seen in Figure 5.

Table 2: CIFAR-10 bits per sub-pixel for Small PixelCNN.

Model                                Bits per sub-pixel
Field=11x5, Plain                    3.11
Field=11x5, NIN                      3.09
Field=11x5, Autoregressive Channel   3.07
Field=15x8, Plain                    3.07
Field=15x8, NIN                      3.04
Field=15x8, Autoregressive Channel   3.03

Figure 5: Samples from the 3.03 bits/dim Small PixelCNN."}, {"section_index": "6", "section_name": "3.4 ABLATION EXPERIMENTS", "section_text": "In order to test the effect of our modifications to PixelCNN, we run a number of ablation experiments where, for each experiment, we remove a specific modification."}, {"section_index": "7", "section_name": "3.4.2 CONTINUOUS MIXTURE LIKELIHOOD INSTEAD OF DISCRETIZATION", "section_text": "In order to test the contribution of our logistic mixture likelihood, we re-run our CIFAR-10 experiment with the 256-way softmax as the output distribution instead. We allow the 256 logits for each sub-pixel to depend linearly on the observed values of previous sub-pixels, with coefficients given as output by the model. Our model with softmax likelihood is thus strictly more flexible than our model with logistic mixture likelihood, although the parameterization is quite different from that used by van den Oord et al. (2016c). The model now outputs 1536 numbers per pixel, describing the logits on the 256 potential values for each sub-pixel, as well as the coefficients for the dependencies between the sub-pixels. Figure 6 shows that this model trains more slowly than our original model. In addition, the running time per epoch is significantly longer for our TensorFlow implementation. For our architecture, the logistic mixture model thus clearly performs better. Since our architecture differs from that of van den Oord et al. (2016c) in other ways as well, we cannot say whether this would also apply to their model.

Figure 6: Training curves for our model with logistic mixture likelihood versus our model with softmax likelihood.

Instead of directly modeling the discrete pixel values in an image, it is also possible to de-quantize them by adding noise from the standard uniform distribution, as used by Uria et al. (2013) and others, and to model the data as being continuous. The resulting model can be interpreted as a variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014), where the dequantized pixels $z$ form a latent code whose prior distribution is captured by our model. Since the original discrete pixels $x$ can be perfectly reconstructed from $z$ under this model, the usual reconstruction term vanishes from the variational lower bound. The entropy of the standard uniform distribution is zero, so the term that remains is the log-likelihood of the dequantized pixels, which thus gives us a variational lower bound on the log-likelihood of our original data.
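For reference, a tiny sketch of our own showing the dequantization trick just described: adding standard uniform noise to the discrete pixels and converting a continuous log-likelihood to bits per dimension, which lower-bounds the discrete log-likelihood.

```python
import numpy as np

def dequantize(x, rng):
    """x: uint8 image array. Returns continuous values in [0, 256)."""
    return x.astype(np.float64) + rng.uniform(size=x.shape)

def bits_per_dim(total_log_likelihood_nats, num_dims):
    """Convert a log-likelihood (in nats) into bits per dimension."""
    return -total_log_likelihood_nats / (num_dims * np.log(2.0))

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
z = dequantize(x, rng)
print(z.min() >= 0, z.max() < 256)
```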
We re-run our model for CIFAR-10 using the same model settings as those used for the 2.92 bits per dimension result in Table 1, but now we remove the discretization in our likelihood model and instead add standard uniform noise to the image data. The resulting model is a continuous mixture model in the same class as those used by Theis et al. (2012), Uria et al. (2013), Theis & Bethge (2015) and others. After optimization, this model gives a variational lower bound on the data log-likelihood of 3.11 bits per dimension. The difference with the reported 2.92 bits per dimension shows the benefit of using discretization in the likelihood model."}, {"section_index": "8", "section_name": "3.4.3 NO SHORT-CUT CONNECTIONS", "section_text": "Next, we test the importance of the additional parallel short-cut connections in our model, indicated by the dotted lines in Figure 2. We re-run our unconditional CIFAR-10 experiment, but remove the short-cut connections from the model. As seen in Figure 7, the model fails to train without these connections. The reason for needing these extra short-cuts is likely to be our use of sub-sampling, which discards information that otherwise cannot easily be recovered.

Figure 7: Training curves for our model with and without short-cut connections."}, {"section_index": "9", "section_name": "3.4.4 NO DROPOUT", "section_text": "We re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve on the training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per
Contrary to what we might naively expect, the perceptual quality of\nthe generated images by the overfitted model is not great, as shown in Figure\nFigure 8: Samples from intentionally overfitted PixelCNN model trained on CIFAR-10, with train\nlog-likelihood of 2.0 bits per dimension: Overfitting does not result in great perceptual quality.\nDiederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling.\nImproving variational inference with inverse autoregressive flow. In Advances in Neural Informa-\ntion Processing Systems, 2016.\nDanilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi\nmate inference in deep generative models. In ICML, pp. 1278-1286, 2014.\nLucas Theis and Matthias Bethge. Generative image modeling using spatial Istms. In Advances i\nNeural Information Processing Systems, pp. 1927-1935. 2015.\nLucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mix-\ntures applied to multiscale image representations. PloS one, 7(7):e39857, 2012.\nAaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,\nNal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for\nraw audio. arXiv preprint arXiv: 1609.03499, 2016a.\nAaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks\nIn International Conference on Machine Learning (ICML), 2016b.\nAaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko-\nray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint\narXiv: 1606.05328. 2016c.\nBenigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive\ndensity-estimator. In Advances in Neural Information Processing Systems, pp. 2175-2183, 2013."}]
rJqFGTslg
[{"section_index": "0", "section_name": "PRUNING FILTERS FOR EFFICIENT CONVNETS", "section_text": "Asim Kaday\nUniversity of Maryland\nhaoli@cs.umd.edu\nUniversity of Maryland\nUniversity of Marylanc\nhjs@cs.umd.edu\nThe success of CNNs in various applications is accompanied by a significant\nincrease in the computation and parameter storage costs. Recent efforts toward\nreducing these overheads involve pruning and compressing the weights of various\nlayers without hurting original accuracy. However, magnitude-based pruning of\nweights reduces a significant number of parameters from the fully connected layers\nand may not adequately reduce the computation costs in the convolutional layers\ndue to irregular sparsity in the pruned networks. We present an acceleration method\nfor CNNs, where we prune filters from CNNs that are identified as having a small\neffect on the output accuracy. By removing whole filters in the network together\nwith their connecting feature maps, the computation costs are reduced significantly\nIn contrast to pruning weights, this approach does not result in sparse connectivity\npatterns. Hence, it does not need the support of sparse convolution libraries and\ncan work with existing efficient BLAS libraries for dense matrix multiplications\nWe show that even simple filter pruning techniques can reduce inference costs for\nVGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining\nclose to the original accuracy by retraining the networks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The ImageNet chavenge has led to Seniticant advancements in exploring various architectural\n\n(2015) ):|S\nat the networks ae grown deeper, with an overall increase in the number of parameters and\nconvolution operations. These high capacity networks have significant inference costs especially\nwhen used with embedded sensors or mobile devices where computational and power resources\nmay be limited. For these applications, in addition to accuracy, computational efficiency and small\nnetwork sizes are crucial enabling factors (Szegedy et al.| (2015b)). In addition, for web services\nthat provide image search and image classification APIs that operate on a time budget often serving\nhundreds of thousands of images per second. benefit significantly from lower inference times.\nThere has been a Sgnificant amour of work on reducing the storage and computation costs by model\n\nOar Roe (Le Cun et al. ;[Hassibi & Stork] (1993); [Srinivas & Babul] (2015); [Han et al.\n;|Mariet & Sra](2016| an Teohitp (2015}/2016b) report impressive compression rates\non AlexNer krizhesky otal ]@2012))-and-VGGNet(Simonyan & Zisserman\n\n(2015)) by pruning\n\nweights with small magnitudes and then retraining without hurting the overall accuracy. However,\npruning parameters does not necessarily reduce the computation time since the majority of the\nparameters removed are from the fully connected layers where the computation cost is low, e.g., the\nfully connected layers of VGG-16 occupy 90% of the total parameters but only contribute less than\n1% of the overall floating point operations (FLOP). 
They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al., 2016), but these approaches additionally require sparse BLAS libraries or specialized hardware.

Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al., 2015a;b; He & Sun, 2015; He et al., 2016), in which the fully connected layers are replaced with average pooling layers (Lin et al., 2013; He et al., 2016), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of the feature maps (He & Sun, 2015). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate.

CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity, and it therefore does not require sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning-and-retraining strategy to save retraining time when pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and lower inference costs than AlexNet or VGGNet, still allow about 30% FLOP reduction without sacrificing too much accuracy. We conduct a sensitivity analysis of the convolutional layers in ResNets that improves the understanding of ResNets.

The early work of Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections.

To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low-rank product of two smaller matrices without changing the original number of filters (Denil et al., 2013; Jaderberg et al., 2014; Zhang et al., 2015b;a; Tai et al., 2016; Ioannou et al., 2016). Other approaches to reduce the convolutional overheads include FFT-based convolutions (Mathieu et al., 2013) and fast convolution using the Winograd algorithm (Lavin & Gray, 2016). Additionally, quantization (Han et al., 2016b) and binarization (Rastegari et al., 2016; Courbariaux & Bengio, 2016) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads.

Several works have studied removing redundant feature maps from a well-trained network (Anwar et al., 2015; Polyak & Wolf, 2015).
Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures.

Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. In the filter-level pruning, all of the above works use the ℓ2,1-norm as a regularizer. Similar to the above works, we use the ℓ1-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer, except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage."}, {"section_index": "3", "section_name": "3 PRUNING FILTERS AND FEATURE MAPS", "section_text": "Let n_i denote the number of input channels for the i-th convolutional layer and h_i/w_i be the height/width of the input feature maps. The convolutional layer transforms the input feature maps x_i ∈ R^{n_i × h_i × w_i} into the output feature maps x_{i+1} ∈ R^{n_{i+1} × h_{i+1} × w_{i+1}}, which are used as input feature maps for the next convolutional layer. This is achieved by applying n_{i+1} 3D filters F_{i,j} ∈ R^{n_i × k × k} on the n_i input channels, in which one filter generates one feature map. Each filter is composed of n_i 2D kernels K ∈ R^{k × k} (e.g., 3 × 3). All the filters, together, constitute the kernel matrix F_i ∈ R^{n_i × n_{i+1} × k × k}. The number of operations of the convolutional layer is n_{i+1} n_i k^2 h_{i+1} w_{i+1}. As shown in Figure 1, when a filter F_{i,j} is pruned, its corresponding feature map x_{i+1,j} is removed, which reduces n_i k^2 h_{i+1} w_{i+1} operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional n_{i+2} k^2 h_{i+2} w_{i+2} operations. Pruning m filters of layer i will reduce m/n_{i+1} of the computation cost for both layers i and i+1.

Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer.
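To make the arithmetic above concrete, the following minimal Python sketch evaluates the operation counts and the savings from pruning m filters of layer i. The layer dimensions below are illustrative stand-ins, not values from the paper.

```python
# Multiply-accumulate count of one convolutional layer:
# n_out * n_in * k^2 * h_out * w_out, as in the text.
def conv_ops(n_in, n_out, k, h_out, w_out):
    return n_out * n_in * k * k * h_out * w_out

# Hypothetical pair of consecutive 3x3 conv layers.
n_i, n_i1, n_i2 = 64, 128, 128      # channels entering layers i, i+1, i+2
h1, w1 = 32, 32                     # output map size of layer i
h2, w2 = 32, 32                     # output map size of layer i+1
k, m = 3, 32                        # kernel size; filters pruned from layer i

ops_i  = conv_ops(n_i,  n_i1, k, h1, w1)
ops_i1 = conv_ops(n_i1, n_i2, k, h2, w2)
saved_i  = m * n_i  * k * k * h1 * w1   # removed filters of layer i
saved_i1 = m * n_i2 * k * k * h2 * w2   # removed kernels of layer i+1
print(saved_i / ops_i, saved_i1 / ops_i1)  # both equal m / n_{i+1} = 0.25
```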
Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights Σ|F_{i,j}|, i.e., its ℓ1-norm ||F_{i,j}||_1. Since the number of input channels, n_i, is the same across filters, Σ|F_{i,j}| also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of the filters' absolute weights sum for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the ℓ1-norm is a good criterion for data-free filter selection.

The procedure of pruning m filters from the i-th convolutional layer is as follows:

1. For each filter F_{i,j}, calculate the sum of its absolute kernel weights s_j = Σ_{l=1}^{n_i} Σ |K_l|.
2. Sort the filters by s_j.
3. Prune the m filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
4. A new kernel matrix is created for both the i-th and (i+1)-th layers, and the remaining kernel weights are copied to the new model.
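The selection step itself is only a few lines of array code. Below is a minimal NumPy sketch of steps 1-4 for one pair of plain convolutional layers; the shapes follow the notation above, and the random weights are placeholders for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
W_i  = rng.standard_normal((128, 64, 3, 3))    # layer i:   (n_{i+1}, n_i, k, k)
W_i1 = rng.standard_normal((128, 128, 3, 3))   # layer i+1: (n_{i+2}, n_{i+1}, k, k)
m = 32                                         # filters to prune from layer i

# Step 1: s_j = sum of absolute kernel weights of each filter F_{i,j}.
s = np.abs(W_i).sum(axis=(1, 2, 3))
# Steps 2-3: sort and keep the n_{i+1} - m filters with the largest sums.
keep = np.sort(np.argsort(s)[m:])
# Step 4: build the new kernel matrices; the kernels of layer i+1 acting on
# the pruned feature maps are removed as well.
W_i_new  = W_i[keep]                           # (n_{i+1} - m, n_i, k, k)
W_i1_new = W_i1[:, keep]                       # (n_{i+2}, n_{i+1} - m, k, k)
```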
Relationship to pruning weights. Pruning filters with a low absolute weights sum is similar to pruning low magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially for the case of low sparsity.

Relationship to group-sparse regularization on filters. Recent work (Zhou et al. (2016); Wen et al. (2016)) applies group-sparse regularization (Σ_j ||F_{i,j}||_2, i.e., the ℓ2,1-norm) on convolutional filters, which also favors zeroing out filters with small ℓ2-norms, i.e., F_{i,j} = 0. In practice, we do not observe a noticeable difference between the ℓ2-norm and the ℓ1-norm for filter selection, as the important filters tend to have large values for both measures (Appendix 6.1). Zeroing out the weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4."}, {"section_index": "4", "section_name": "3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING", "section_text": "To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them.

Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them.

We now discuss how to prune filters across the network. Previous work prunes the weights on a layer by layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) for deep networks, pruning and retraining on a layer by layer basis can be extremely time-consuming; 2) pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network; 3) for complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers.

To prune filters across multiple layers, we consider two strategies for layer-wise filter selection (a minimal sketch of both rules follows Figure 3 below):

• Independent pruning determines which filters should be pruned at each layer independent of other layers.
• Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights.

Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy, especially when many filters are pruned.

Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in the previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a (n_{i+1} − 1) × (n_{i+2} − 1) kernel matrix.
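Here is the promised minimal NumPy sketch of the two selection rules for one pair of consecutive layers; the shapes and random weights are illustrative assumptions. The only change in the greedy rule is that the sum for layer i+1 skips the kernels belonging to feature maps already pruned in layer i.

```python
import numpy as np

rng = np.random.default_rng(0)
W_i  = rng.standard_normal((128, 64, 3, 3))    # layer i filters
W_i1 = rng.standard_normal((128, 128, 3, 3))   # layer i+1 filters
m = 32

pruned_i = np.argsort(np.abs(W_i).sum(axis=(1, 2, 3)))[:m]

# Independent pruning: rank layer i+1 filters by their full absolute sum.
s_indep = np.abs(W_i1).sum(axis=(1, 2, 3))

# Greedy pruning: ignore kernels for the feature maps pruned in layer i
# (the blue maps in Figure 3).
mask = np.ones(W_i1.shape[1], dtype=bool)
mask[pruned_i] = False
s_greedy = np.abs(W_i1[:, mask]).sum(axis=(1, 2, 3))

pruned_i1_independent = np.argsort(s_indep)[:m]
pruned_i1_greedy      = np.argsort(s_greedy)[:m]
```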
For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions, and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as this does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes them difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identity feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 × 1 kernels). The second layer of the residual block is pruned with the same filter indices as selected by the pruning of the shortcut layer.

Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions.

After pruning the filters, the performance degradation should be compensated by retraining the network. There are two strategies to prune the filters across multiple layers: 1) prune once and retrain: prune the filters of multiple layers at once and retrain until the original accuracy is restored; 2) prune and retrain iteratively: prune filters layer by layer or filter by filter and retrain after each step, letting the weights adapt before the next pruning.

We find that for the layers that are resilient to pruning, the prune and retrain once strategy can be used to prune away significant portions of the network, and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away, or large portions of the network are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs, especially for very deep networks."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet) that are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created, and the remaining parameters of the modified layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate of 0.001 and retrain for 40 epochs on CIFAR-10 and 20 epochs on ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to 3× the original training time to retrain pruned networks (Han et al. (2015)).

Table 1: Overall results. The best test/validation accuracy during the retraining process is reported. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity.
Model                               Error(%)   FLOP         Pruned %   Parameters   Pruned %
VGG-16                              6.75       3.13 × 10^8  -          1.5 × 10^7   -
VGG-16-pruned-A                     6.60       2.06 × 10^8  34.2%      5.4 × 10^6   64.0%
VGG-16-pruned-A scratch-train       6.88
ResNet-56                           6.96       1.25 × 10^8  -          8.5 × 10^5   -
ResNet-56-pruned-A                  6.90       1.12 × 10^8  10.4%      7.7 × 10^5   9.4%
ResNet-56-pruned-B                  6.94       9.09 × 10^7  27.6%      7.3 × 10^5   13.7%
ResNet-56-pruned-B scratch-train    8.69
ResNet-110                          6.47       2.53 × 10^8  -          1.72 × 10^6  -
ResNet-110-pruned-A                 6.45       2.13 × 10^8  15.9%      1.68 × 10^6  2.3%
ResNet-110-pruned-B                 6.70       1.55 × 10^8  38.6%      1.16 × 10^6  32.4%
ResNet-110-pruned-B scratch-train   7.06
ResNet-34                           26.77      3.64 × 10^9  -          2.16 × 10^7  -
ResNet-34-pruned-A                  27.44      3.08 × 10^9  15.5%      1.99 × 10^7  7.6%
ResNet-34-pruned-B                  27.83      2.76 × 10^9  24.2%      1.93 × 10^7  10.8%
ResNet-34-pruned-C                  27.52      3.37 × 10^9  7.5%       2.01 × 10^7  7.2%

VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modified version of the model on CIFAR-10 and achieves state of the art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015) but add a Batch Normalization (Ioffe & Szegedy (2015)) layer after each convolutional layer and the first linear layer, without using Dropout (Srivastava et al. (2014)). Note that when the last convolutional layer is pruned, the input to the linear layer is changed and the connections are also removed.

Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP of the pruned model.

layer type   w_i × h_i   #Maps   FLOP      #Params   #Maps (pruned)   FLOP %
Conv_1       32 × 32     64      1.8E+06   1.7E+03   32               50%
Conv_2       32 × 32     64      3.8E+07   3.7E+04   64               50%
Conv_3       16 × 16     128     1.9E+07   7.4E+04   128              0%
Conv_4       16 × 16     128     3.8E+07   1.5E+05   128              0%
Conv_5       8 × 8       256     1.9E+07   2.9E+05   256              0%
Conv_6       8 × 8       256     3.8E+07   5.9E+05   256              0%
Conv_7       8 × 8       256     3.8E+07   5.9E+05   256              0%
Conv_8       4 × 4       512     1.9E+07   1.2E+06   256              50%
Conv_9       4 × 4       512     3.8E+07   2.4E+06   256              75%
Conv_10      4 × 4       512     3.8E+07   2.4E+06   256              75%
Conv_11      2 × 2       512     9.4E+06   2.4E+06   256              75%
Conv_12      2 × 2       512     9.4E+06   2.4E+06   256              75%
Conv_13      2 × 2       512     9.4E+06   2.4E+06   256              75%
Linear       1           512     2.6E+05   2.6E+05   512              50%
Linear       1           10      5.1E+03   5.1E+03   10               0%
Total                            3.1E+08   1.5E+07                    34%

As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of its filters without affecting the accuracy.
Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4 × 4 or 2 × 2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions on feature maps below 8 × 8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80% of the filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from the previous layers, thereby hurting the accuracy. With 50% of the filters pruned in layer 1 and layers 8 to 13, we achieve 34% FLOP reduction for the same accuracy.

Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by ℓ1-norm."}, {"section_index": "6", "section_name": "4.2 RESNET-56/110 ON CIFAR-10", "section_text": "ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 × 32, 16 × 16 and 8 × 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even improves the performance. In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56; layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks of each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps.

Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.

The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% of the filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use p_i to denote the pruning rate for layers in the i-th stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the first pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned.
This might explain why ResNet-110 is easier to prune than ResNet-56."}, {"section_index": "7", "section_name": "4.3 RESNET-34 ON ILSVRC2012", "section_text": "ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers of each stage equally. In Table 1 we compare two configurations of pruning percentages for the first three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune as compared to the deeper ResNets.

We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of the input feature maps for the residual layer and then increases the dimension to match the identity mapping.
Figure 7: Sensitivity to pruning for the residual blocks of ResNet-34: (a) pruning the first layer of residual blocks; (b) pruning the second layer of residual blocks.

We compare our approach with pruning random filters and largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning for all layers at the pruning ratio of 90%. The accuracy of pruning filters with the largest ℓ1-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger ℓ1-norms.

Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.

The activation-based feature map pruning method removes the feature maps with weak activation patterns together with their corresponding filters and kernels (Polyak & Wolf (2015)), and needs sample data as input to determine which feature maps to prune. A feature map x_{i+1,j} ∈ R^{h_{i+1} × w_{i+1}} is generated by applying the filter F_{i,j} ∈ R^{n_i × k × k} to the feature maps of the previous layer x_i ∈ R^{n_i × h_i × w_i}, i.e., x_{i+1,j} = F_{i,j} * x_i. Given N randomly selected images {x^n_1}_{n=1}^N from the training set, the statistics of each feature map can be estimated with one epoch of forward passes over the N sampled data. Note that we calculate the statistics on the feature maps generated by the convolution operations, before batch normalization or non-linear activation. We compare our ℓ1-norm based filter pruning with feature map pruning using the following criteria: σ_mean-mean(x_{i,j}) = (1/N) Σ_{n=1}^N mean(x^n_{i,j}), σ_mean-std(x_{i,j}) = (1/N) Σ_{n=1}^N std(x^n_{i,j}), σ_mean-ℓ1(x_{i,j}) = (1/N) Σ_{n=1}^N ||x^n_{i,j}||_1, σ_mean-ℓ2(x_{i,j}) = (1/N) Σ_{n=1}^N ||x^n_{i,j}||_2, and σ_var-ℓ2(x_{i,j}) = var({||x^n_{i,j}||_2}_{n=1}^N), where mean, std and var are standard statistics (average, standard deviation and variance) of the input. Here, σ_var-ℓ2 is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias.
The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (N = 50,000 for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest filter pruning outperforms feature map pruning with the criteria σ_mean-mean, σ_mean-ℓ1, σ_mean-ℓ2 and σ_var-ℓ2. The σ_mean-std criterion has better or similar performance to the ℓ1-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for the layers conv_1, conv_2 and conv_3. We find the ℓ1-norm is a good heuristic for filter selection considering that it is data free.

Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10: (a) ||F_{i,j}||_1, (b) σ_mean-mean, (c) σ_mean-std, (d) σ_mean-ℓ1, (e) σ_mean-ℓ2, (f) σ_var-ℓ2.
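The criteria above are simple statistics over per-image feature maps; a minimal NumPy sketch follows. The activations are random placeholders with a hypothetical shape, standing in for pre-batch-normalization convolution outputs collected over N images.

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 128, 16, 16))  # (N, n_maps, h, w)
flat = acts.reshape(acts.shape[0], acts.shape[1], -1)

sigma_mean_mean = flat.mean(axis=2).mean(axis=0)
sigma_mean_std  = flat.std(axis=2).mean(axis=0)
sigma_mean_l1   = np.abs(flat).sum(axis=2).mean(axis=0)
l2              = np.sqrt((flat ** 2).sum(axis=2))
sigma_mean_l2   = l2.mean(axis=0)
sigma_var_l2    = l2.var(axis=0)

# Prune the feature maps (with their filters and kernels) that score lowest.
m = 32
pruned = np.argsort(sigma_var_l2)[:m]
```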
"}, {"section_index": "8", "section_name": "5 CONCLUSIONS", "section_text": "Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank the anonymous reviewers for their valuable feedback."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both Weights and Connections for Efficient Neural Networks. In NIPS, 2015.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a.
Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b.
Babak Hassibi and David G. Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.
Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.
Yann Le Cun, John S. Denker, and Sara A. Solla. Optimal Brain Damage. In NIPS, 1989.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015.
Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.
Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. In ECCV, 2016.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.
Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.
Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016.

We compare the ℓ1-norm with the ℓ2-norm for filter pruning. As shown in Figure 10, the ℓ1-norm works slightly better than the ℓ2-norm for layer conv_2. There is no significant difference between the two norms for the other layers.

Figure 10: Comparison of ℓ1-norm and ℓ2-norm based filter pruning for VGG-16 on CIFAR-10: (a) ||F_{i,j}||_1, (b) ||F_{i,j}||_2."}, {"section_index": "11", "section_name": "6.2 FLOP AND WALL-CLOCK TIME", "section_text": "FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, which is independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copying the weights, there are no masks or sparsity introduced into the original dense BLAS operations. Therefore the FLOP and wall-clock time of the pruned model are the same as for a model with a smaller number of filters created from scratch.
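A minimal PyTorch-style timing sketch in the same spirit is given below; the model, batch size and iteration count are placeholders, and this is not the Torch7/cuDNN pipeline used for Table 3. On a GPU, torch.cuda.synchronize() should be called before reading the clock.

```python
import time
import torch
import torchvision.models as models

model = models.vgg16().eval()          # stand-in network
x = torch.randn(16, 3, 224, 224)       # one mini-batch of inputs

with torch.no_grad():
    model(x)                           # warm-up pass
    start = time.time()
    for _ in range(5):
        model(x)
    per_batch = (time.time() - start) / 5
print(f"average forward time per batch: {per_batch:.3f}s")
```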
We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32 × 32 images and 50,000 224 × 224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted for.

Table 3: The reduction of FLOP and wall-clock time for inference.

Model                 FLOP         Pruned %   Time(s)   Saved %
VGG-16                3.13 × 10^8  -          1.23      -
VGG-16-pruned-A       2.06 × 10^8  34.2%      0.73      40.7%
ResNet-56             1.25 × 10^8  -          1.31      -
ResNet-56-pruned-B    9.09 × 10^7  27.6%      0.99      24.4%
ResNet-110            2.53 × 10^8  -          2.38      -
ResNet-110-pruned-B   1.55 × 10^8  38.6%      1.86      21.8%
ResNet-34             3.64 × 10^9  -          36.02     -
ResNet-34-pruned-B    2.76 × 10^9  24.2%      22.93     28.0%"}]
H1GEvHcee
[{"section_index": "0", "section_name": "ANNEALING GAUSSIAN INTO RELU: A NEW SAM-\nPLING STRATEGY FOR LEAKY-RELU RBM", "section_text": "Chun-Liang Li Siamak Ravanbakhsh Barnabas P\u00e9czos\nchunlial,mravanba, bapoczos}@cs -cmu.edu\nRestricted Boltzmann Machine (RBM) is a bipartite graphical model that is usec\nas the building block in energy-based deep generative models. Due to its numer\nical stability and quantifiability of its likelihood, RBM is commonly used witl\nBernoulli units. Here, we consider an alternative member of the exponential fam\nily RBM with leaky rectified linear units \u2014 called leaky RBM. We first study th:\njoint and marginal distributions of the leaky RBM under different leakiness, whic!\nleads to interesting interpretation of the leaky RBM model as truncated Gaussiat\ndistribution. We then propose a simple yet efficient method for sampling fron\nthis model, where the basic idea is to anneal the leakiness rather than the energy\n\u2014i.e., start from a fully Gaussian/Linear unit and gradually decrease the leakines:\nover iterations. This serves as an alternative to the annealing of the temperatur:\nparameter and enables numerical estimation of the likelihood that are more effi\ncient and far more accurate than the commonly used annealed importance sam\npling (AIS). We further demonstrate that the proposed sampling algorithm enjoy:\nrelatively faster mixing than contrastive divergence algorithm, which improves th\ntraining procedure without any additional computational cost."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we are interested in deep generative models. One may naively classify these models\ninto a family of directed deep generative models trainable by back-propagation (e.g., Kingma &\nWelling, 2013; Goodfellow et al., 2014), and deep energy-based models, such as deep belief net-\nwork (Hinton et al., 2006) and deep Boltzmann machine (Salakhutdinov & Hinton, 2009). The\nbuilding block of deep energy-based models is a bipartite graphical model called restricted Boltz-\nmann machine (RBM). The RBM model consists of two layers, visible and hidden. The resulting\ngraphical model which can account for higher-order interactions of the visible units (visible layer)\nusing the hidden units (hidden layer). It also makes the inference easier that there are no interactions\nbetween the variables in each layer.\nThe conventional RBM uses Bernoulli units for both the hidden and visible units (Smolensky, 1986)\nOne extension is using Gaussian visible units to model general natural images (Freund & Haussler\n1994). For hidden units, we can also generalize Bernoulli units to the exponential family (Welling\net al., 2004; Ravanbakhsh et al., 2016).\nNair & Hinton (2010) propose a variation using Rectified Linear Unit (ReLU) for the hidden laye:\nwith a heuristic sampling procedure, which has promising performance in terms of reconstructior\nerror and classification accuracy. Unfortunately, due to its lack of strict monotonicity, RELU RBM\ndoes not fit within the framework of exponential family RBMs (Ravanbakhsh et al., 2016). In.\nstead we study leaky-ReLU RBM (leaky RBM) in this work and address two important issues i) <\nbetter training (sampling) algorithm for ReLU RBM and; ii) a better quantification of leaky RBM\n~i.e., evaluation of its performance in terms of likelihood.\nWe study some of the fundamental properties of leaky RBM, including its joint and marginal dis-\ntributions (Section 2). 
By analyzing these distributions, we show that the leaky RBM is a union of"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "truncated Gaussian distributions. In this paper, we show that training leaky RBM involves underlying positive definite constraints. Because of this, the training can diverge if these constraints are not satisfied. This is an issue that was previously ignored in ReLU RBM, as it was mainly used for pre-training rather than generative modeling.

Our contribution in this paper is three-fold: I) we systematically identify and address model constraints in leaky RBM (Section 3); II) for the training of leaky RBM, we propose a meta algorithm for sampling, which anneals the leakiness during the Gibbs sampling procedure (Section 3), and empirically show that it can boost contrastive divergence with faster mixing (Section 5); III) we demonstrate the power of the proposed sampling algorithm on estimating the partition function. In particular, comparison on several benchmark datasets shows that the proposed method outperforms the conventional AIS (Salakhutdinov & Murray, 2008) in terms of efficiency and accuracy (Section 4). Moreover, we provide an incentive for using leaky RBM by showing that the leaky ReLU hidden units perform better than the Bernoulli units in terms of the model log-likelihood (Section 4).

The Boltzmann distribution is defined as p(x) = e^{−E(x)}/Z, where Z = Σ_x e^{−E(x)} is the partition function. The Restricted Boltzmann Machine (RBM) is a Boltzmann distribution with a bipartite structure. It is also the building block of many deep models (e.g., Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Lee et al., 2009), which are widely used in numerous applications (Bengio, 2009). The conventional Bernoulli RBM models the joint probability p(v, h) of the visible units v ∈ {0, 1}^I and the hidden units h ∈ {0, 1}^J as p(v, h) ∝ exp(−E(v, h)), where E(v, h) = −a^T v − v^T W h − b^T h. The parameters are a ∈ R^I, b ∈ R^J and W ∈ R^{I×J}. We can derive the conditional probabilities as

p(v_i = 1 | h) = σ( Σ_{j=1}^J W_ij h_j + a_i )  and  p(h_j = 1 | v) = σ( Σ_{i=1}^I W_ij v_i + b_j ),   (1)

where σ(·) is the sigmoid function. One extension of Bernoulli RBM is replacing the binary visible units by linear units v ∈ R^I with independent Gaussian noise. The energy function in this case is given by

E(v, h) = Σ_{i=1}^I (v_i − a_i)^2 / (2σ_i^2) − Σ_{i=1}^I Σ_{j=1}^J (v_i/σ_i) W_ij h_j − Σ_{j=1}^J b_j h_j.

The conditional distributions are as follows:

p(v_i | h) = N( Σ_{j=1}^J W_ij h_j + a_i, σ_i^2 )  and  p(h_j = 1 | v) = σ( Σ_{i=1}^I W_ij v_i + b_j ),   (2)

where N(μ, V) is a Gaussian distribution with mean μ and variance V. To simplify the notation, in the following we define η_j = Σ_{i=1}^I W_ij v_i + b_j, that is, η_j is the input to the j-th hidden unit, and similarly define ν_i = Σ_{j=1}^J W_ij h_j + a_i. Using this notation, the conditionals in (2) are p(v_i | ν_i) = N(ν_i, 1) and p(h_j = 1 | η_j) = σ(η_j).

From (1) and (2), we can see that the mean of p(h_j | v) is the nonlinearity of the hidden unit evaluated at η_j = Σ_{i=1}^I W_ij v_i + b_j; e.g., the mean of the Bernoulli unit is the sigmoid function. From this perspective, we can extend the sigmoid function to other functions and thus allow RBM to have more expressive power (Ravanbakhsh et al., 2016). In particular, it would be interesting to use the rectified linear unit (ReLU) nonlinearity, f(η_j) = max(0, η_j), for generative modeling.

To simplify the notation, we assume normalized data so that a_i and σ_i are no longer required.
The energy function is accordingly simplified to E(v, h) = ||v||^2/2 − v^T W h − b^T h. (Note that this elimination does not influence the discussion, and one can easily extend all the results in this paper to the model that includes a_i and σ_i.)

By Ravanbakhsh et al. (2016), the conditional probability of the activation, assuming the nonlinearity f(η_j), is generally defined as p(h_j | v) = exp(−D_f(η_j || h_j) + g(h_j)), where D_f(η_j || h_j) is the Bregman divergence associated with f, and g(h_j) is the base (or carrier) measure in the exponential family which ensures the distribution is well-defined. The Bregman divergence, for a strictly monotonic function f, is D_f(η_j || h_j) = −η_j h_j + F(η_j) + F*(h_j), where F with F′(η_j) = f(η_j) is the anti-derivative (integral) of f and F* is the anti-derivative of f^{−1} (i.e., f^{−1}(f(η)) = η). Note that due to the strict monotonicity of f, f^{−1} is well-defined, and F and F* are commonly referred to as conjugate duals.

Given the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the general treatment of MRF models is (Yang et al., 2012; Ravanbakhsh et al., 2016)

p(v, h) ∝ exp( v^T W h + b^T h − Σ_{i=1}^I ( F*(v_i) − g(v_i) ) − Σ_{j=1}^J ( F*(h_j) − g(h_j) ) ).   (3)

Nair & Hinton (2010) use an RBM with Gaussian visible units and ReLU hidden activation functions for pretraining. They suggest sampling from max(0, η_j + N(0, σ(η_j))) for conditional sampling of the hidden units (compare to (2)). However, this sampling heuristic does not suggest the parametric form of the joint ReLU-Gaussian distribution. This also means we cannot evaluate it using methods such as annealed importance sampling that require access to this parametric form. In fact, only strictly monotonic activation functions yield feasible joint and conditional distributions in the exponential family RBM, and ReLU is not strictly monotonic (Ravanbakhsh et al., 2016). Similar activation functions that are monotonic are Softplus, f(η_j) = log(1 + e^{η_j}), and leaky ReLU (Maas et al., 2013), defined as f(η_j) = max(cη_j, η_j), where c ∈ (0, 1) is the leakiness parameter. In contrast to the ReLU RBM, the joint parametric forms of these two distributions are available. However, the energy (logarithm of the joint probability) in the case of the Softplus activation function contains a polylogarithmic term that requires the evaluation of an infinite series; see Table 1 in Ravanbakhsh et al. (2016). For this reason, here we focus on the leaky ReLU activation function.

Considering the leaky ReLU activation function f(η) = max(cη, η), using this formalism, the conditional distribution of the hidden units in the leaky RBM simplifies to (see Appendix A.1 for details)

p(h_j | v) = N(η_j, 1)  if η_j > 0,  and  p(h_j | v) = N(cη_j, c)  if η_j ≤ 0.   (4)

Since the visible units use the identity function, the corresponding conditional distribution is a Gaussian¹:

p(v_i | h) = N( Σ_{j=1}^J W_ij h_j, 1 ).

¹which can also be written as p(v_i | h) = exp(−D_f(ν_i || v_i) + g(v_i)), where ν_i = Σ_{j=1}^J W_ij h_j, f(ν_i) = ν_i, D_f(ν_i || v_i) = (ν_i − v_i)^2/2 and g(v_i) = −log √(2π).

Having these two conditional distributions is enough for training a leaky RBM model using contrastive divergence (Hinton, 2002) or some other alternatives (e.g., Tieleman, 2008; Tieleman & Hinton, 2009).
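Since both conditionals are diagonal Gaussians, one Gibbs sweep is straightforward to implement. A minimal NumPy sketch follows; the dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

def gibbs_step(v, W, b, c, rng):
    """One Gibbs sweep for the leaky RBM: sample h | v by (4), then v | h."""
    eta = v @ W + b                          # inputs to the hidden units
    pos = eta > 0
    mean = np.where(pos, eta, c * eta)       # N(eta, 1) or N(c*eta, c)
    std = np.where(pos, 1.0, np.sqrt(c))
    h = mean + std * rng.standard_normal(eta.shape)
    v_new = h @ W.T + rng.standard_normal(v.shape)  # N(W h, I)
    return v_new, h

rng = np.random.default_rng(0)
I_dim, J_dim, c = 20, 10, 0.01
W = 0.1 * rng.standard_normal((I_dim, J_dim))
b = np.zeros(J_dim)
v = rng.standard_normal(I_dim)
for _ in range(100):
    v, h = gibbs_step(v, W, b, c, rng)
```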
Substituting the conjugate dual of the leaky ReLU into (3), the joint distribution of the leaky RBM becomes

p(v, h) ∝ exp( v^T W h + b^T h − ||v||^2/2 − Σ_{η_j>0} h_j^2/2 − Σ_{η_j≤0} h_j^2/(2c) ),   (5)

and the corresponding visible marginal distribution is

p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − c Σ_{η_j≤0} W_j W_j^T ) v + Σ_{η_j>0} b_j W_j^T v + c Σ_{η_j≤0} b_j W_j^T v ),   (6)

where W_j is the j-th column of W.

From (6) we see that the marginal probability is determined by the affine constraints η_j > 0 or η_j ≤ 0 for all hidden units j. By combinatorics, these constraints divide R^I (the visible domain) into at most M = Σ_{i=0}^{I} C(J, i) convex regions R_1, ..., R_M. An example with I = 2 and J = 3 is shown in Figure 1. If J ≤ I, then we have at most 2^J regions.

We discuss the two types of these regions. For bounded regions, such as R_1 in Figure 1, the integration of (6) is also bounded, which results in a valid distribution. Before we discuss the unbounded cases, we define Ω = I − Σ_{j=1}^J α_j W_j W_j^T, where α_j = 1_{η_j>0} + c 1_{η_j≤0}. For an unbounded region, if Ω ∈ R^{I×I} is a positive definite (PD) matrix, then the probability density is proportional to a multivariate Gaussian distribution with mean μ = Ω^{−1}( Σ_{j=1}^J α_j b_j W_j ) and precision matrix Ω (covariance matrix Ω^{−1}), but over an affine-constrained region. Therefore, the distribution of each unbounded region can be treated as a truncated Gaussian distribution, and the marginal distribution can be treated as a union of truncated Gaussian distributions. Note that leaky RBM is different from Su et al. (2016), which uses a single truncated Gaussian distribution to model the conditional distribution p(h|v) instead of the marginal distribution, and which requires approximate and more complicated sampling algorithms for the truncated Gaussian distribution, while leaky RBM only requires sampling from ordinary Gaussian distributions.

On the other hand, if Ω is not PD, and the region R_i contains the eigenvectors with negative eigenvalues of Ω, the integration of (6) over R_i is divergent (infinite), which cannot result in a valid probability distribution. In practice, with this type of parameter, when we do Gibbs sampling on the conditional distributions, the sampling will diverge. However, it is infeasible to check exponentially many regions for each gradient update.

The proof is shown in Appendix B. From Theorem 1 we can see that if the constraint I − WW^T ≻ 0 holds, then one can guarantee that the distribution of every region is a valid truncated Gaussian distribution (since α_j ≤ 1 implies Σ_j α_j W_j W_j^T ⪯ WW^T, so Ω is PD for every region). Therefore, we introduce the following projection step for each W after the gradient update:

W = argmin_W || W − W̃ ||_F^2   s.t.   I − W W^T ≻ 0,   (7)

where W̃ is the weight matrix after the (unconstrained) gradient update.

Theorem 2. The above projection step (7) can be done by shrinking the singular values to be less than 1.

The proof is shown in Appendix C. The training algorithm of the leaky RBM is shown in Algorithm 1. By using the projection step (7), we can treat the leaky RBM as a union of truncated Gaussian distributions, which uses the weight vectors to divide the space of visible units into several regions and uses a truncated Gaussian distribution to model each region.
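Theorem 2 makes the projection a one-line SVD operation. A minimal NumPy sketch follows; the small margin below 1 is our own addition to keep the inequality strict, and the matrix size is a placeholder.

```python
import numpy as np

def project(W, margin=1e-3):
    """Projection step (7): clip the singular values of W below 1 so that
    I - W W^T remains positive definite (Theorem 2)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.minimum(s, 1.0 - margin)
    return (U * s) @ Vt

rng = np.random.default_rng(0)
W = project(rng.standard_normal((20, 10)))
Omega = np.eye(20) - W @ W.T
assert np.all(np.linalg.eigvalsh(Omega) > 0)   # I - W W^T is PD
```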
The empirical study of the divergent values and the necessity of the projection step is shown in Appendix D. Without the projection step, when we run Gibbs sampling for several iterations from the model, the sampled values diverge because the model does not have a valid marginal distribution p(v). It also implies that we cannot train leaky RBM with larger CD steps without projection, which would result in divergent gradients. The detailed discussion is given in Appendix D.

If we set the leakiness c to be 1, then (6) becomes a simple multivariate Gaussian distribution N( (I − WW^T)^{−1} W b, (I − WW^T)^{−1} ), which can be easily sampled without Gibbs sampling. Also, the projection step (7) guarantees it is a valid Gaussian distribution. We then decrease the leakiness by a small ε, and use the samples from the multivariate Gaussian distribution at c = 1 as the initialization for Gibbs sampling. Note that the distribution of each region is a truncated Gaussian distribution. When we only decrease the leakiness by a small amount, the resulting distribution is a "similar" truncated Gaussian distribution with a more concentrated density. From this observation, we can expect the original multivariate Gaussian distribution to serve as a good initialization. The one-dimensional example is shown in Figure 2. We then repeat this procedure until we reach the target leakiness. The algorithm can be seen as annealing the leakiness during the Gibbs sampling procedure. The meta algorithm is shown in Algorithm 2. Next, we show that the proposed sampling algorithm can help both the partition function estimation and the training of leaky RBM.
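A minimal NumPy sketch of this meta algorithm is given below. The linear annealing schedule and the single Gibbs sweep per leakiness level are simplifying assumptions; the number of levels and sweeps are free parameters of Algorithm 2.

```python
import numpy as np

def sample_leaky_rbm(W, b, c_target, n_samples, n_levels=20, rng=None):
    """Anneal the leakiness from c = 1 down to c_target (Algorithm 2 sketch)."""
    rng = rng or np.random.default_rng()
    I_dim = W.shape[0]
    # At c = 1: p(v) = N((I - W W^T)^{-1} W b, (I - W W^T)^{-1}); sample exactly.
    cov = np.linalg.inv(np.eye(I_dim) - W @ W.T)
    v = rng.multivariate_normal(cov @ (W @ b), cov, size=n_samples)
    for c in np.linspace(1.0, c_target, n_levels)[1:]:
        eta = v @ W + b                       # one Gibbs sweep at leakiness c
        pos = eta > 0
        h = np.where(pos, eta, c * eta) \
            + np.where(pos, 1.0, np.sqrt(c)) * rng.standard_normal(eta.shape)
        v = h @ W.T + rng.standard_normal(v.shape)
    return v
```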
"}, {"section_index": "3", "section_name": "4 PARTITION FUNCTION ESTIMATION", "section_text": "It is known that estimating the partition function of RBM is intractable (Salakhutdinov & Murray, 2008). Existing approaches, including Salakhutdinov & Murray (2008); Grosse et al. (2013); Liu et al. (2015); Carlson et al. (2016), focus on using sampling to approximate the partition function of the conventional Bernoulli RBM instead of the RBM with Gaussian visible units and non-Bernoulli hidden units. In this paper, we focus on extending the classic annealed importance sampling (AIS) algorithm (Salakhutdinov & Murray, 2008) to leaky RBM.

Gibbs sampling is the core procedure for RBM, including training, inference, and estimating the partition function (Fischer & Igel, 2012; Tieleman, 2008; Salakhutdinov & Murray, 2008). For every task, we start by initializing v randomly from an arbitrary distribution q, and iteratively sample from the conditional distributions. Gibbs sampling guarantees that the procedure reaches the stationary distribution in the long run for any initial distribution q. However, if q is close to the target distribution p, it can significantly shorten the number of iterations needed to achieve the stationary distribution.

Assuming that we want to estimate the partition function Z of p(v), with p(v) = p*(v)/Z and p*(v) ∝ Σ_h exp(−E(v, h)), Salakhutdinov & Murray (2008) start from an initial distribution p_0(v) ∝ Σ_h exp(−E_0(v, h)), where computing the partition function Z_0 of p_0(v) is tractable and we can draw samples from p_0(v). They then use the "geometric path" to anneal the intermediate distributions as p_k(v) ∝ p*_k(v) = Σ_h exp( −β_k E_0(v, h) − (1 − β_k) E(v, h) ), where the β_k form a grid from 1 to 0. If we let β_0 = 1, we can draw samples v_k from p_k(v) by using samples v_{k−1} from p_{k−1}(v) for k ≥ 1 via Gibbs sampling. The partition function is then estimated via Ẑ = (Z_0/M) Σ_{i=1}^M ω^{(i)}, where

ω^{(i)} = Π_{k=1}^{K} p*_k( v^{(i)}_{k−1} ) / p*_{k−1}( v^{(i)}_{k−1} ).

Salakhutdinov & Murray (2008) use an initial distribution with independent visible units and without hidden units. We consider the application of AIS to the leaky-ReLU case with E_0(v, h) = ||v||^2/2, which results in a multivariate Gaussian distribution p_0(v). Compared with the meta algorithm shown in Algorithm 2, which anneals the leakiness, AIS anneals between energy functions."}, {"section_index": "4", "section_name": "4.1 STUDY ON TOY EXAMPLES", "section_text": "As we discussed in Section 3.1, a leaky RBM with J hidden units is a union of 2^J truncated Gaussian distributions. Here we perform a study of the leaky RBM with a small number of hidden units. Since in this example the number of hidden units is small, we can integrate out all possible configurations of h. However, integrating a truncated Gaussian distribution with general affine constraints does not have an analytical solution, and several approximations have been developed (e.g., Pakman & Paninski, 2014). To compare our results with the exact partition function, we consider a special case that has the following form:

p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − c Σ_{η_j≤0} W_j W_j^T ) v ).

Compared to (6), it is equivalent to the setting where b = 0. Geometrically, every W_j passes through the origin. We further impose the additional constraint W_i ⊥ W_j, ∀i ≠ j. Therefore, we divide the whole space into 2^J equally-sized regions. A three dimensional example is shown in Figure 3. The partition function of this special case then has the analytical form

Z = (2π)^{I/2} Σ_{α ∈ {1,c}^J} (1/2^J) | I − Σ_{j=1}^J α_j W_j W_j^T |^{−1/2}.   (8)

Figure 3: A three dimensional example with 3 hidden units, where the W_j are orthogonal to each other.

We randomly initialize W and use SVD to make the columns orthogonal. Also, we scale ||W_j|| to satisfy I − WW^T ≻ 0. The leakiness parameter is set to 0.01. For Salakhutdinov & Murray (2008) (AIS-Energy), we use 10^5 particles with 10^5 intermediate distributions. For the proposed method (AIS-Leaky), we use only 10^4 particles with 10^3 intermediate distributions. In this small problem we study the cases where the model has 5, 10, 20 and 30 hidden units and 3072 visible units. The true log partition function log Z is shown in Table 1, and the difference between log Z and the estimates given by the two algorithms is shown in Table 2.

Table 1: The true partition function for leaky-ReLU RBM with different numbers of hidden units.
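Assuming the reconstruction of the analytical form (8) above, the exact log Z can be evaluated by enumerating the 2^J sign configurations, which is feasible only for small J. A minimal NumPy sketch:

```python
import numpy as np

def log_partition_orthogonal(W, c):
    """log Z for the b = 0, mutually orthogonal columns special case (8):
    an equal-weight (1/2^J) sum of region-wise Gaussian normalizers."""
    I_dim, J_dim = W.shape
    log_terms = np.empty(2 ** J_dim)          # exponential in J: small J only
    for bits in range(2 ** J_dim):
        alpha = np.array([1.0 if (bits >> j) & 1 else c for j in range(J_dim)])
        Omega = np.eye(I_dim) - (W * alpha) @ W.T
        _, logdet = np.linalg.slogdet(Omega)
        log_terms[bits] = -0.5 * logdet
    mx = log_terms.max()                       # log-sum-exp for stability
    return (0.5 * I_dim * np.log(2 * np.pi) - J_dim * np.log(2)
            + mx + np.log(np.exp(log_terms - mx).sum()))
```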
"}, {"section_index": "4", "section_name": "4.1 STUDY ON TOY EXAMPLES", "section_text": "As we discussed in Section 3.1, leaky RBM with J hidden units is a union of 2^J truncated Gaussian
distributions. Here we perform a study on the leaky RBM with a small number of hidden units. Since
in this example the number of hidden units is small, we can integrate out all possible configurations
of h. However, integrating a truncated Gaussian distribution with general affine constraints does
not have analytical solutions, and several approximations have been developed (e.g., Pakman &
Paninski, 2014). To compare our results with the exact partition function, we consider a special case
that has the following form:

    p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} w_j w_j^T − c Σ_{η_j≤0} w_j w_j^T ) v ).

Compared to (6), it is equivalent to the setting where b = 0. Geometrically, every w_j passes through
the origin. We further put the additional constraint w_i ⊥ w_j, ∀i ≠ j. Therefore, we divide the
whole space into 2^J equally-sized regions. A three dimensional example is shown in Figure 3. Then
the partition function of this special case has the analytical form

    Z = Σ_{a ∈ {1,c}^J} (1/2^J) (2π)^{I/2} | I − Σ_{j=1}^{J} a_j w_j w_j^T |^{−1/2}.

We randomly initialize W and use SVD to make columns orthogonal. Also, we scale ||w_j|| to
satisfy I − WW^T ≻ 0. The leakiness parameter is set to be 0.01. For Salakhutdinov & Murray
(2008) (AIS-Energy), we use 10^5 particles with 10^5 intermediate distributions. For the proposed
method (AIS-Leaky), we use only 10^4 particles with 10^3 intermediate distributions. In this small
problem we study the cases when the model has 5, 10, 20 and 30 hidden units and 3072 visible units.
The true log partition function log Z is shown in Table 1 and the difference between log Z and the
estimates given by the two algorithms are shown in Table 2.
Table 1: The true partition function for Leaky-ReLU RBM with different numbers of hidden units.
Table 2: The difference between the true partition function and the estimations of the two algorithms,
with standard deviation.
From Table 2, we observe that AIS-Leaky has significantly better and more stable estimations
than AIS-Energy, and this gap increases as we increase the number of hidden units.
AIS-Leaky achieves this with orders of magnitude reduced computation — e.g., here it uses ~0.1%
of the resources used by conventional AIS. For example, when we increase J from 5 to 30, the bias
(difference) of AIS-Leaky only increases from 0.02 to 0.13; however, the bias of AIS-Energy increases
from 1.76 to 9.6. We further study the implicit connection between the proposed AIS-Leaky and
AIS-Energy in Appendix E, which shows AIS-Leaky is a special case of AIS-Energy under certain
conditions.
It is known that the reconstruction error is not a proper approximation of the likelihood (Hinton,
2012). One commonly adopted way to compare generative models is to sample from the model,
and visualize the images to check the quality. However, Theis et al. (2016) show that better visualization
does not imply better likelihood. Also, a single layer model cannot adequately model
complicated natural images (the result for Bernoulli-Gaussian RBM has been shown in Ranzato
& Hinton (2010)), which makes the visualization comparison difficult (Appendix F has a few
visualization results).
Fortunately, our accurate estimate of the partition function for leaky RBM can produce a reliable
quantitative estimate of the representation power of leaky RBM. We compare against the Bernoulli-
Gaussian RBM, which has Bernoulli hidden units and Gaussian visible units. We trained both
models with CD-20 and momentum. For both models, we used 500 hidden units. We initialized
W by sampling from Unif(0, 0.01), a = 0, b = 0 and σ = 1. The momentum parameter was 0.9 and
the batch size was set to 100. We tuned the learning rate between 10^{-1} and 10^{-5}. We studied two
benchmark data sets, including CIFAR10 and SVHN. The data was normalized to have zero mean
and standard deviation of 1 for each pixel. The results of the log-likelihood are reported in Table 3.
Table 3: The log-likelihood performance of Bernoulli-Gaussian RBM and leaky RBM.
From Table 3, leaky RBM outperforms Bernoulli-Gaussian RBM significantly. The unsatisfactory
performance of Bernoulli-Gaussian RBM may be in part due to the optimization procedure. If we
tune the decay schedule of the learning rate for each dataset in an ad-hoc way, we observe the
performance of Bernoulli-Gaussian RBM can be improved by ~300 nats for both datasets. Also,
increasing CD steps brings slight improvement. The other possibility is bad mixing during the
CD iterations. The advanced algorithms of Tieleman (2008); Tieleman & Hinton (2009) may help.
Although Nair & Hinton (2010) demonstrate the power of ReLU in terms of reconstruction error
and classification accuracy, it does not imply its superior generative capability. Our study confirms
that leaky RBM can have much better generative performance compared to Bernoulli-Gaussian
RBM.
In this section, we show that the idea of annealing between leakiness benefits the mixing in Gibbs
sampling in other settings. A common procedure for comparison of sampling methods for RBM is
through visualization. Here, we are interested in more quantitative metrics and the practical benefits
of improved sampling. For this, we consider optimization performance as the evaluation metric.
The gradient of the log-likelihood function L(θ|v_data) of general RBM models is

    ∂L(θ|v_data)/∂θ = E_{h|v_data}[ ∂E(v, h)/∂θ ] − E_{v,h}[ ∂E(v, h)/∂θ ].    (9)

Since the second expectation in (9) is usually intractable, different approximation algorithms are
used (Fischer & Igel, 2012).
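As an illustration, the following sketch computes a CD-style stochastic estimate of this gradient with respect to W for a leaky RBM (our illustration, not the authors' code). The anneal flag corresponds to the "Mix" idea discussed below, which anneals the leakiness over the chain steps; using the conditional mean max(cη, η) for the hidden statistics is our own simplifying choice.

    import numpy as np

    def cd_gradient_W(v_data, W, a, b, c_target, k=20, anneal=False):
        # CD-k estimate of the log-likelihood gradient w.r.t. W for one data vector.
        # E[h|v] equals the leaky ReLU max(c*eta, eta) of eta = W^T v + b.
        def h_mean(v, c):
            eta = W.T @ v + b
            return np.maximum(c * eta, eta)

        def gibbs_step(v, c):
            eta = W.T @ v + b
            h = h_mean(v, c) + np.where(eta > 0, 1.0, np.sqrt(c)) * np.random.randn(eta.shape[0])
            return W @ h + a + np.random.randn(a.shape[0])   # v | h is N(Wh + a, I)

        pos = np.outer(v_data, h_mean(v_data, c_target))     # data-dependent statistics
        cs = np.linspace(1.0, c_target, k) if anneal else np.full(k, c_target)
        v = v_data.copy()
        for c in cs:                                          # negative phase chain
            v = gibbs_step(v, c)
        neg = np.outer(v, h_mean(v, c_target))
        return pos - neg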
In this section, we compare two gradient approximation procedures. The baselines are the conventional
contrastive divergence (CD) (Hinton, 2002) and persistent contrastive divergence (PCD) (Tieleman,
2008). The second method is using Algorithm 2 (Leaky) with the same number of mixing
steps as CD. The experiment setup is the same as that of Section 4.
Figure 4: Training leaky RBM with different sampling algorithms (CD, PCD, Mix, Leaky): log-likelihood
versus training iterations on (a) SVHN and (b) CIFAR10.
The results are shown in Figure 4. The proposed sampling procedure is slightly better than typical
CD steps. The reason is that we only anneal the leakiness for 20 steps; getting an accurate estimation
requires thousands of steps, as shown in Section 4 when we estimate the partition function. Therefore,
the estimated gradient is still inaccurate. However, it still outperforms the conventional CD
algorithm. On the other hand, unlike the binary RBM case shown in Tieleman (2008), PCD does
not outperform CD with 20 mixing steps for leaky RBM.
The drawback of Algorithm 2 is that sampling v from N((I − WW^T)^{-1}Wb, (I − WW^T)^{-1})
requires computing the mean, the covariance, and the Cholesky decomposition of the covariance matrix in
every iteration, which is computationally expensive. We study a mixture algorithm by combining
CD and the idea of annealing leakiness. The mixture algorithm replaces the sampling from
N((I − WW^T)^{-1}Wb, (I − WW^T)^{-1}) with sampling from the empirical data distribution. The
resulting Mix algorithm is almost the same as the CD algorithm, except that it anneals the leakiness over
the iterations as in Algorithm 2. The results of the Mix algorithm are also shown in Figure 4.
The Mix algorithm is slightly worse than the original Leaky algorithm, but it also outperforms the
conventional CD algorithm without additional computation cost. The comparison in terms of CPU
time is shown in Appendix F. Annealing the leakiness helps the Mix algorithm explore different
modes of the distribution, thereby improving the training. The idea could also be combined with
more advanced algorithms (Tieleman, 2008; Tieleman & Hinton, 2009).*
In this paper, we study the properties of the exponential family distribution produced by leaky RBM.
This study relates the leaky RBM model and truncated Gaussian distributions and reveals an underlying
positive definite constraint of training leaky RBM. We further proposed a meta sampling algorithm,
which anneals between leakiness during the Gibbs sampling procedure. We first demonstrate that
the proposed sampling algorithm is significantly more effective and efficient in estimating the partition
function than the conventional AIS algorithm. Second, we show that the proposed sampling
algorithm has comparatively better mixing properties (compared to CD). A few directions are worth
further study; in particular we are investigating speeding up the naive projection step, either using
the barrier function as shown in Hsieh et al. (2011) or by eliminating the need for projection by
artificially bounding the domain via additional constraints.
*We studied the PCD extension of the proposed sampling algorithm. However, the performance is not as
stable as CD."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "
Y. Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2009.
J. Bornschein and Y. Bengio. Reweighted wake-sleep. In ICLR, 2015.
Y. Burda, R. B. Grosse, and R. Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood
using reverse annealing. In AISTATS, 2015.
D. E. Carlson, P. Stinson, A. Pakman, and L. Paninski. Partition functions from Rao-Blackwellized
tempered sampling. In ICML, 2016.
K. Cho, T. Raiko, and A. Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural
Computation, 2013.
A. Fischer and C. Igel. An introduction to restricted Boltzmann machines. In CIARP, 2012.
Y. Freund and D. Haussler.
Unsupervised learning of distributions on binary vectors using two layer
networks. Technical report, 1994.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and
Y. Bengio. Generative adversarial nets. In NIPS, 2014.
R. B. Grosse, C. J. Maddison, and R. Salakhutdinov. Annealing between distributions by averaging
moments. In NIPS, 2013.
G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 2002.
G. E. Hinton. A practical guide to training restricted Boltzmann machines. In Neural Networks:
Tricks of the Trade (2nd ed.). 2012.
G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural
Computation, 2006.
C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation
using quadratic approximation. In NIPS, 2011.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. CoRR, 2013.
H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable
unsupervised learning of hierarchical representations. In ICML, 2009.
Q. Liu, J. Peng, A. Ihler, and J. Fisher III. Estimating the partition function by discriminance
sampling. In UAI, 2015.
A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic
models. In ICML Workshop on Deep Learning for Audio, Speech, and Language Processing, 2013.
V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
A. Pakman and L. Paninski. Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians.
Journal of Computational and Graphical Statistics, 2014.
N. Parikh and S. Boyd. Proximal algorithms. Found. Trends Optim., 2014.
M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order
Boltzmann machines. In CVPR, 2010.
S. Ravanbakhsh, B. Póczos, J. G. Schneider, D. Schuurmans, and R. Greiner. Stochastic neural
networks with monotonic activation functions. In AISTATS, 2016.
R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
For leaky RBM, the activation function of hidden units is defined as f(η_j) = max(cη_j, η_j), where
c ∈ (0, 1) and η_j = Σ_i W_ij v_i + b_j. The inverse function of f is f^{-1}(h_j) = min(h_j, h_j/c).
Therefore, the anti-derivatives are

    F(η_j) = η_j^2 / 2 if η_j > 0, and cη_j^2 / 2 otherwise;
    F*(h_j) = h_j^2 / 2 if h_j > 0, and h_j^2 / (2c) otherwise.

From Ravanbakhsh et al. (2016), the conditional distribution is defined as

    p(h_j | η_j) = exp( η_j h_j − F(η_j) − F*(h_j) + g(h_j) ).    (12)

By plugging F and F* into (12), we get the conditional distribution for leaky RBM:

    p(h_j | v) = N(η_j, 1) with g(h_j) = −log √(2π), if η_j > 0;
    p(h_j | v) = N(cη_j, c) with g(h_j) = −log √(2πc), if η_j ≤ 0.

The activation function of Gaussian visible units can be treated as the linear unit f(γ_i) = γ_i, where
γ_i = Σ_j W_ij h_j. Following similar steps for deriving F and F*, we get the anti-derivatives
F(γ_i) = γ_i^2 / 2 and F*(v_i) = v_i^2 / 2."}, {"section_index": "6", "section_name": "A.2 JOINT AND MARGINAL DISTRIBUTIONS", "section_text": "Given the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the general
treatment for the MRF model given by Yang et al. (2012) is

    p(v, h) ∝ exp( v^T Wh − Σ_{i=1}^{I} ( F*(v_i) − g(v_i) ) − Σ_{j=1}^{J} ( F*(h_j) − g(h_j) ) ).

For leaky RBM this becomes

    p(v, h) ∝ exp( v^T Wh − Σ_{η_j>0} ( h_j^2/2 + log √(2π) ) − Σ_{η_j≤0} ( h_j^2/(2c) + log √(2πc) ) + b^T h − ||v||^2 / 2 ),

and integrating out h region by region gives the marginal

    p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} w_j w_j^T − c Σ_{η_j≤0} w_j w_j^T ) v + Σ_{η_j>0} b_j w_j^T v + c Σ_{η_j≤0} b_j w_j^T v ).

Since WW^T − Σ_j α_j w_j w_j^T = Σ_j (1 − α_j) w_j w_j^T ⪰ 0 for α_j ∈ {c, 1}, we have WW^T ⪰ Σ_j α_j w_j w_j^T.
Therefore, I − Σ_j α_j w_j w_j^T ⪰ I − WW^T ≻ 0.
Given the SVD W = USV^T, for any W̃ = ŨS̃Ṽ^T we have

    ||W − W̃||_F = ||USV^T − ŨS̃Ṽ^T||_F ≥ ( Σ_{i=1}^{I} (S_ii − S̃_ii)^2 )^{1/2},

so the projection step only needs to modify the singular values of W.
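A minimal sketch of such an SVD-based projection follows (our illustration; the margin eps below is an assumed hyperparameter, not a value from the paper):

    import numpy as np

    def project_weights(W, eps=1e-3):
        # Keep U and V from the SVD and shrink only the singular values so that
        # every s_i < 1; then I - W W^T has eigenvalues 1 - s_i^2 > 0 (positive definite).
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.minimum(s, 1.0 - eps)) @ Vt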
"}, {"section_index": "7", "section_name": "D NECESSITY OF THE PROJECTION STEP", "section_text": "We conduct a short comparison to demonstrate that the projection step is necessary for the leaky RBM
on generative tasks. We train two leaky RBMs as follows. The first model is trained with the same
setting as in Section 4. We use the convergence of the log-likelihood as the stopping criterion. The
second model is trained by CD-1 with weight decay and without the projection step. We stop the training
when the reconstruction error is less than 10^{-2}. After we train these two models, we run Gibbs
sampling with 1000 independent chains for several steps and output the average value of the visible
units. Note that the visible units are normalized to zero mean. The results on SVHN and CIFAR10
are shown in Figure 5.
Figure 5: Divergence results on two datasets: average value of the visible units (log scale) versus
Gibbs sampling iterations, for weight decay versus projection, on (a) SVHN and (b) CIFAR10.
From Figure 5, the model trained by weight decay without the projection step suffers from the problem
of diverged values. It confirms the study shown in Section 3.1. It also implies that we cannot
train leaky RBM with larger CD steps when we do not do the projection; otherwise, we would have
diverged gradients. Therefore, the projection is necessary for training leaky RBM for the generative
purpose. However, we also observe that the projection step is not necessary for the classification
and reconstruction tasks. The reason may be the independence of different evaluation criteria (Hinton,
2012; Theis et al., 2016) or other implicit reasons to be studied.
We analyze the performance gap between AIS-Leaky and AIS-Energy. One major difference is the
initial distribution. The intermediate marginal distribution of AIS-Energy has the following form:

    p_k(v) ∝ exp( −(1/2) v^T ( I − (1 − β_k) Σ_{η_j>0} w_j w_j^T − (1 − β_k) c Σ_{η_j≤0} w_j w_j^T ) v ).

To address the higher bias problem of AIS-Energy, we replace the initial distribution with the one
used in Algorithm 2. By elementary calculation, the marginal distribution becomes

    p_k(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} w_j w_j^T − ( β_k + (1 − β_k) c ) Σ_{η_j≤0} w_j w_j^T ) v ),

which recovers the proposed Algorithm 2. From this analysis, we understand AIS-Leaky is a special
case of conventional AIS-Energy with better initialization inspired by the study in Section 3. Also,
by this connection between AIS-Energy and AIS-Leaky, we note that AIS-Leaky can be combined
with other extensions of AIS (Grosse et al., 2013; Burda et al., 2015) as well.
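For concreteness, the following sketch builds the two intermediate precision matrices within one region of the input space (the sign pattern of η is fixed per region); the function names are illustrative assumptions:

    import numpy as np

    def precision_matrix(W, pos, c_pos, c_neg):
        # I - c_pos * sum_{j: eta_j > 0} w_j w_j^T - c_neg * sum_{j: eta_j <= 0} w_j w_j^T
        P = np.eye(W.shape[0])
        for j in range(W.shape[1]):
            P -= (c_pos if pos[j] else c_neg) * np.outer(W[:, j], W[:, j])
        return P

    def ais_energy_precision(W, pos, c, beta):
        # AIS-Energy scales both groups of terms by (1 - beta_k).
        return precision_matrix(W, pos, 1.0 - beta, (1.0 - beta) * c)

    def ais_leaky_precision(W, pos, c, beta):
        # AIS-Leaky keeps the positive-side terms intact and anneals only the
        # effective leakiness, beta_k + (1 - beta_k) * c, on the negative side.
        return precision_matrix(W, pos, 1.0, beta + (1.0 - beta) * c)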
"}, {"section_index": "8", "section_name": "F.1 SAMPLED IMAGES", "section_text": "We show the sampled images from leaky RBM trained on the CIFAR10 and SVHN datasets. We randomly
initialize 20 chains and run Gibbs sampling for 1000 iterations. The sampled results are shown in
Figure 6. The results show that a single layer RBM does not adequately model CIFAR10 and SVHN
when compared to multilayer models. Similar results for a single layer Bernoulli-Gaussian RBM
from Ranzato & Hinton (2010) (in gray scale) are shown in Figure 7. Therefore, we instead focused
on quantitative evaluation of the log-likelihood in Table 3.
Figure 6: Sampled images from leaky RBM.
Figure 7: Sampled images in gray-scale from Bernoulli-Gaussian RBM trained on CIFAR10 (Ranzato
& Hinton, 2010)."}, {"section_index": "9", "section_name": "F.2 COMPUTATIONAL TIME BETWEEN DIFFERENT SAMPLING STRATEGIES", "section_text": "The comparison in terms of CPU time of the different sampling algorithms discussed in Section 5 is
shown in Figure 8. Please note that the complexities of CD and Mix are almost the same; Mix
only needs a few more constant-time steps, which can be ignored compared with the sampling steps.
Leaky is more time-consuming because of computing and decomposing the covariance matrix, as we
discussed in Section 5. We also report the execution time of each step of the algorithms in Table 4.
Table 4: The execution time (s) of each step of the algorithms (1000 iterations).
Figure 8: Training leaky RBM with different sampling algorithms: log-likelihood versus running
time (s) on (a) SVHN and (b) CIFAR10."}, {"section_index": "10", "section_name": "F.3 STUDY ON RELU-BERNOULLI RBM", "section_text": "We study the idea of annealing leakiness on the RBM model with leaky ReLU hidden units and
Bernoulli visible units. We create toy datasets with 20, 25 and 30 visible units, as shown in
Figure 9. The small datasets allow exact computation of the partition function. For each dataset, we
sample 60,000 images for training and 10,000 images for testing. We use 100 hidden units and PCD
to train the model. The log-likelihood results are shown in Table 5.
Figure 9: Toy datasets with (a) I = 20, (b) I = 25, and (c) I = 30 visible units.
Compared to the Gaussian visible units case we study in Section 3, where p(v) is a multivariate
Gaussian distribution when c = 1, the partition function of p(v) in ReLU-Bernoulli when c = 1
does not have an analytical form. Therefore, we use the following two-stage alternative. We first
run the standard AIS algorithm, which anneals the energy, to the distribution with leakiness c = 1.
We then switch to annealing the leakiness from 1 to the target value. For the typical AIS algorithm
(AIS-Energy), we use 10^4 chains with 2 × 10^4 intermediate distributions. For the proposed two-stage
algorithm (AIS-Leaky), we use 10^4 chains with 10^4 intermediate distributions for annealing
to c = 1 and the other 10^4 distributions for annealing the leakiness. The results are shown in Table 6.
In Table 6, the standard AIS algorithm (AIS-Energy) has unsatisfactory performance. We also show
the performance of AIS for estimating the partition function of models with different leakiness on
Toy20. We use 10^4 independent chains and 2 × 10^4 intermediate distributions. The results are
shown in Table 7. From Table 7, we observe that AIS performs worse when the leakiness is closer
to 0. Although we observed that increasing chains and intermediate distributions could improve the
performance, the improvements are limited. The study demonstrates that when the non-linearity of
the distribution increases (the leakiness value c decreases), the standard AIS cannot effectively
estimate the partition function within feasible computational time. On the other hand, it also
confirms that the proposed idea, annealing the leakiness, can serve as an effective building block for
algorithms without increasing the algorithm complexity. Note that the unsatisfactory performance
of AIS may be addressed by Grosse et al. (2013). From Appendix E, the two-stage algorithm used
here can also be improved by applying Grosse et al. (2013).
Table 5: The log-likelihood and true partition function for ReLU-Bernoulli RBM with different
numbers of visible units.
Table 6: The difference between the true partition function and the estimations of the two algorithms,
with standard deviation.
Table 7: The difference (with standard deviation) between the true partition function and the estimations
of AIS-Energy under different leakiness."}, {"section_index": "11", "section_name": "F.3.1 MNIST AND CALTECH DATASETS", "section_text": "We study the MNIST and Caltech 101 Silhouettes datasets with 500 hidden units and train the model
with CD-25. The results are shown in Table 8 and Table 9. The leaky RBM is better than the conventional
Bernoulli RBM and some deep models on MNIST data. Although leaky RBM does not
outperform Su et al. (2017), it enjoys the advantage of a simpler sampling procedure (Gaussian
distribution vs. truncated Gaussian distribution) in the binary visible unit case.
Table 8: The testing log-likelihood result on MNIST.
Table 9: The testing log-likelihood result on Caltech 101 Silhouettes."}]
HyenWc5gx
[{"section_index": "0", "section_name": "REPRESENTATION STABILITY AS A REGULARIZER FOR\nIMPROVED TEXT ANALYTICS TRANSFER LEARNING", "section_text": "Matthew Riemer, Elham Khabiri, and Richard Goodwin\nAlthough neural networks are well suited for sequential transfer learning tasks, the\ncatastrophic forgetting problem hinders proper integration of prior knowledge. In\nthis work, we propose a solution to this problem by using a multi-task objective\nbased on the idea of distillation and a mechanism that directly penalizes forget-\nting at the shared representation layer during the knowledge integration phase of\ntraining. We demonstrate our approach on a Twitter domain sentiment analysis\ntask with sequential knowledge transfer from four related tasks. We show that our\ntechnique outperforms networks fine-tuned to the target task. Additionally, we\nshow both through empirical evidence and examples that it does not forget useful\nknowledge from the source task that is forgotten during standard fine-tuning. Sur-\nprisingly, we find that first distilling a human made rule based sentiment engine\ninto a recurrent neural network and then integrating the knowledge with the target\ntask data leads to a substantial gain in generalization performance. Our experi-\nments demonstrate the power of multi-source transfer techniques in practical text\nanalytics problems when paired with distillation. In particular, for the SemEval\n2016 Task 4 Subtask A (2016) dataset we surpass the state of the\nart established during the competition with a comparatively simple model archi-\ntecture that is not even competitive when trained on only the labeled task specific\ndata."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Sequential transfer learning methodologies leverage knowledge representations from a source task\nin order to improve performance for a target task. A significant challenge faced when transferring\nneural network representations across tasks is that of catastrophic forgetting (or catastrophic inter-\nference). This is where a neural network experiences the elimination of important old information\nwhen learning new information. The very popular strategy of fine-tuning a neural network involves\nfirst training a neural network on a source task and then using the model to simply initialize the\nweights of a target task network up to the highest allowable common representation layer. However\nit is highly susceptible to catastrophic forgetting, because in training for the target task it has no ex-\nplicit incentive to retain what it learned from the source task. While one can argue that forgetting the\nsource task should not matter if only the target task is of interest, our paper adds to the recent empir-\nical evidence across problem domains (Li & Hoiem| {2016),(Rusu et al.|/2016) that show additional\nnetwork stability can lead to empirical benefits over the fine-tuning algorithm. It seems as though\nfor many Deep Learning problems we can benefit from an algorithm that promotes more stability\nto tackle the well known stability-plasticity dilemma. One popular approach for addressing this\nproblem is rehearsals Murre| 1992), (Robins} 1995). Rehearsals refers to a neural network training\nstrategy where old examples are relearned as new examples are learned. In the transfer setting it can\nbe seen as related to multi-task learning where two tasks are trained at the same\ntime, rather than sequentially, while sharing input encoder to a shared hidden represen-\ntation. 
However, in rehearsals the representation is biased in favor of the source task representation
through initialization. This technique is very sensible because, while fine-tuning is susceptible to
catastrophic forgetting, multi-task learning is not (Caruana, 1997)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One of the biggest issues with the standard rehearsals paradigm is that it requires a cached memory
of training examples that have been seen in the past. This can be a massive requirement as
the number of source tasks and training data sizes scale. One compelling technique for addressing
this problem is the concept of pseudorehearsals (Robins, 1995; 1996), where relearning is
performed on an artificially constructed population of pseudo-items instead of the actual old examples.
Unfortunately, current automatic techniques in the text analytics domain have not yet mastered
producing linguistically plausible data. As such, the pseudorehearsals paradigm is likely to waste
computational time that could be spent on learning realistic patterns that may occur during testing.
In our work, we extend the Learning without Forgetting (LwF) paradigm of Li & Hoiem (2016) to
the text analytics domain using Recurrent Neural Networks. In this approach, the target task data is
used both for learning the target task and for rehearsing information learned from the source task by
leveraging synthetic examples generated for the target task input by the model that only experienced
training on the source task data. As argued by Li & Hoiem (2016), this setup strikes an important
balance between classification performance, computational efficiency, and simplicity in deployment.
Regardless of whether they are applied to real source task examples, real target task examples,
or synthetic examples, paradigms in the style of rehearsals all address the shortcomings of neural
network forgetting by casting target task integration as a multi-task learning problem. However,
this is not quite the purpose of the multi-task learning architecture, which was designed for joint
learning of tasks from scratch at the same time. The key disconnect is that in multi-task learning, the
transformations from the shared hidden layer to the outputs for each task are all learned and updated
with the changing hidden representation. This would imply that, in the framework of rehearsals, it
is possible for there to be significant changes during learning of the network's representation, and
thus its abilities on the source task itself. While it would be desirable to claim we were allowing
our source task network to become even better based on the target task than it was before, this
motivation seems idealistic in practice. One reason this is idealistic is because multi-task learning
generally only works well when tasks are sampled at different rates or alternatively given different
priority in the neural network loss function. As a result, it is most likely that
auxiliary source tasks will receive less priority from the network for optimization than the target
task. Additionally, we observe in our experiments, and it has been observed by others in (Rusu
et al., 2015), that it is generally not possible to distill multiple complex tasks into a student network
at full teacher performance for all tasks. This seems to imply the degradation of the source task
performance during training is somewhat inevitable in a multi-task learning paradigm.
We address this issue with our proposed forgetting cost technique.
We demonstrate that it, in fact,
can be valuable to keep the hidden to output transformation of the source tasks fixed during knowledge
integration with the target task. This way, we impose a stronger regularization on the hidden
representation during target task integration by not allowing it to change aspects that were important
to the source task's performance without direct penalization in the neural network's loss function.
We demonstrate empirically both that freezing the source task specific weights leads to less deterioration
in the accuracy on the source task after integration, and that it achieves better generalization
performance in our setting. The forgetting cost is practical and easy to implement in training any
kind of neural network. In our experiments, we explore application of the forgetting cost in a recurrent
neural network to the three way Twitter sentiment analysis task of SemEval 2016 Task 4
Subtask A and find it to achieve consistently superior performance to reasonable baseline transfer
learning approaches in four examples of knowledge transfer for this task.
We also demonstrate how powerful distillation can be in the domain of text analytics when paired
with the idea of the forgetting cost. Significantly, we show that a high quality gazetteer based logical
rule engine can be distilled using unlabeled data into a neural network and used to significantly improve
performance of the neural network on the target task. This is achieved with a novel extension
of the LwF paradigm by Li & Hoiem (2016) to the scenario of a source task with the same output
space as the target task. This can be a very promising direction for improving the ability of humans
to directly convey knowledge to deep learning algorithms. Indeed, a human defined rule can contain
far more information than a single training example, as that rule can be projected on to many unlabeled
examples that the neural network can learn from. This is the reason human teachers generally
begin teaching human students tasks by going over core rules at the onset of learning. Moreover,
we showcase that multiple expert networks trained on the target task with prior knowledge from
different source tasks can be effectively combined in an ensemble and then distilled into a single
GRU model (Cho et al., 2014; Chung et al., 2014). Leveraging this combination of distillation
and knowledge transfer techniques allows us to achieve state of the art accuracy on the SemEval
task with a model that performs 11% worse than the best prior techniques when trained only on the
labeled data.
Since the work of Bucilu et al. (2006) and Hinton et al. (2015) showed that an ensemble of neural
network classifiers can be distilled into a single model, knowledge distillation from a teacher network
to a student network has become a growing topic of neural network research. In Ba & Caruana
(2014) it was shown that a deep teacher neural network can be learned by a shallow student network.
This idea was extended in Romero et al. (2014), where it was demonstrated that a deep and narrow
neural network can learn a representation that surpasses its teacher. The use of distillation as
a means of sharing biases from multiple tasks was explored in (Lopez-Paz et al., 2016), where the
teacher network is trained with the output of the other tasks as input. It is not obvious how to extend
a recurrent neural network to best use this kind of capability over a sequence.
The idea of distilling
from multiple source task teachers into a student network was highlighted in the reinforcement
learning setting in (Rusu et al., 2015). Additionally, the concept of using distillation for knowledge
transfer was also explored in (Chen et al., 2015), where function preserving transformations from
smaller to bigger neural network architectures were outlined. This technique could also provide
value in some instances for our approach where wider or deeper neural networks are needed for the
task being transferred to than was needed for the original task. Distillation over target task data was
first proposed as a means of alleviating catastrophic forgetting in sequential knowledge transfer as
applied to image classification in (Li & Hoiem, 2016). We extend this approach for its first application,
to our knowledge, to text analytics problems, with a recurrent neural network architecture, and in
the setting where the source task and target task have the same output. The chief distinction of our
proposed forgetting cost is that source task specific parameters are held fixed during integration with
the target task, as opposed to the joint training of all parameters used by Li & Hoiem (2016). Our
experiments empirically support the intuition that freezing these parameters leads to greater retention
of source task performance after target task integration and better generalization to the target task.
An ensemble over multiple diverse models trained for the same sentiment analysis task was also
considered in (Mesnil et al., 2014) for the IMDB binary movie reviews sentiment dataset (Maas
et al., 2011). We tried this ensemble model in our work and found that it gave very limited improvement.
Our ensemble technique learns a more powerful weighted average based on the soft targets
of each task and a multi-step greedy binary fusion approach that works better for the Twitter sentiment
analysis task in our experiments. Knowledge transfer from multiple tasks was considered in prior
work to estimate the age of Twitter users based on the content of their tweets. We
experimented with the hidden layer sharing approach outlined in that work and found that even when
using just a single softmax combining layer, it would overfit on our limited training and validation
data. Progressive neural networks (Rusu et al., 2016) is a recently proposed method very similar in
motivation to our forgetting cost, as it is directly trying to solve the catastrophic forgetting problem.
The idea is that learned weight matrices relate the fixed representations learned on the source task
to the construction of representations for the target task. In our experiments, the progressive neural
networks approach consistently fails to even match the results achieved with fine-tuning. We hypothesize
that although using fixed representations to aid learning addresses catastrophic forgetting,
it suffers from the curse of dimensionality. As such, when training data is relatively small given the
complexity of the task, it is prone to overfitting as it effectively increases the input dimension size
through shared fixed representations.
The combination of logic rules and neural networks has been explored in a variety of different architectures
and settings. These neural-symbolic systems (Garcez et al., 2012) include early examples
such as KBANN (Towell et al., 1990) that construct network architectures from given rules to perform
reasoning. Hu et al. (2016) very recently also looked at the problem of distilling logical rules
into a neural network text analytics classifier.
However, our approach is much more generic as it can
be applied to integrate knowledge from any kind of pre-made classifier and treats the rule engine as
a black box. In Hu et al. (2016) they consider the individual rules and leverage an iterative convex
optimization algorithm alongside the neural network to regularize the subspace of the network. In
our work we demonstrate that, by guarding against catastrophic forgetting, it is possible to efficiently
leverage rules for transfer by utilizing a generic sequential knowledge transfer framework. We do
not need to make any modification to the architecture of the neural network during testing and do
not need iterative convex optimization during training.
In the sequential knowledge transfer problem setting explored in this paper, training is first conducted
solely on the source task examples S, including K_S training examples (x_Si, y_Si) ∈ S, where
x_Si is the input representation and y_Si is the output representation. After training is complete on S,
we would like to now use prior knowledge obtained in the model trained on S to improve generalization
on a new target task with examples T, which includes K_T training examples (x_Ti, y_Ti) ∈ T.
Here we assume that the input representations x_Si and x_Ti are semantically aligned in the same
representation space. As such, if there is useful knowledge in S that applies in some direct or indirect
way to the target task that is not present in T, we would expect a good knowledge integration approach
to generalize better to the target task than is possible using the training data in T alone.
Strong performance for the sequential knowledge transfer problem is a first step towards the greater
goal of a mechanism for effective lifelong learning (Thrun, 1996)."}, {"section_index": "3", "section_name": "3.2 FORGETTING COST FOR TUNING A TARGET TASK MODEL", "section_text": "The most straightforward application of our proposed forgetting cost paradigm is for the case of
integrating a neural network that has been trained on source task data S, which has outputs in the
same representation space as the outputs for the target task data T. In this case, the forgetting cost
amounts to the addition of a regularization term in the objective function during the integration phase
when we train using T. This promotes the neural network to be able to recreate the soft labels of the
initialized model found after training on S before integration is started with T. More formally:

    Loss = L(y, ŷ) + α_f L(y_init, ŷ)    (1)

where L is some loss function (we use mean squared error in our experiments) and y_init is the soft
label generated for the target task input x_i based on the model after training just on S. The model
trained just on S is also used to initialize the weights of the target task model before integration
with T, as we do in the standard fine-tuning paradigm. α_f is a hyperparameter that can be utilized to
control the extent of allowed forgetting. Of course, a very similar way to express this idea would be
to mix synthetic training examples T' with the same input as T and output generated by the model
trained just on S with the true target task training examples T. In this case, the mixing rate of the
teacher generated training examples is analogous to our forgetting parameter α_f determining the
prioritization. These techniques perform quite similarly in our experiments, but we actually find
that the formulation in equations 1 and 3 performs slightly better on the test set. For example, this
formulation is superior by 0.4% accuracy in tuning a distilled representation of a logical rule engine.
We conjecture that learning tasks in the same gradient step when they are related to the same input
data results in slightly less noisy gradients.
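A minimal sketch of equation 1 with the mean squared error loss used in our experiments (an illustration, not the original Theano code):

    import numpy as np

    def forgetting_cost(y, y_hat, y_init, alpha_f):
        # Equation 1: L(y, y_hat) + alpha_f * L(y_init, y_hat); y_init are the
        # source-task model's soft labels for the same target-task input.
        mse = lambda p, q: np.mean((p - q) ** 2)
        return mse(y, y_hat) + alpha_f * mse(y_init, y_hat)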
"}, {"section_index": "4", "section_name": "3.3 FORGETTING COST FOR KNOWLEDGE TRANSFER FROM A RELATED TASK", "section_text": "The assumption in section 3.2 that the output of the source task data S should be in the same
representation space as the output for the target task data T is quite a big one. It rules out the vast
majority of knowledge sources that we can potentially leverage. As such, we propose an extension
that does not make this restriction for application in sequential knowledge transfer of tasks that are
not directly semantically aligned. We update our model to include another predicted output separate
from ŷ:

    ŷ_init = f_init(W_fixed h_shared + b_fixed)    (2)

where ŷ_init is a predicted output attempting to recreate the soft labels of the original model trained
just on S. f_init is the non-linearity used in the final layer of the source task model. Weight matrix
W_fixed and bias b_fixed are taken from the final layer of the source task model and are not updated
during integration with the target task data T. As a result, the loss function is updated from section
3.2 to:

    Loss = L(y, ŷ) + α_f L(y_init, ŷ_init)    (3)

where the hidden state is shared between both terms in the objective function. Up to the shared
hidden layer, we initialize the model for the target task with the weights learned just using S. Random
matrices and bias vectors are now used to initialize the prediction of ŷ based on the shared hidden
representation. This can be seen as a weak form of restricting the model parameters that can be
useful for regularization. The hidden representation is in effect constrained so that it is promoted
not to change in key areas that have a large effect on the output vector of the source task model. On
the other hand, there is little regularization for parameters that have little effect on the output vector
for the source task model."}, {"section_index": "5", "section_name": "4 RECURRENT NEURAL NETWORK MODEL", "section_text": "In recent years, recurrent neural network models have become a tool of choice for many NLP tasks.
In particular, the LSTM variant has become popular as it alleviates
the vanishing gradients problem (Bengio et al., 1994) known to stop recurrent neural networks
from learning long term dependencies over the input sequence. In our experiments we use the simpler
GRU network (Cho et al., 2014; Chung et al., 2014), which generally achieves the same accuracy
despite a less complex architecture. Each time step t is associated with an input x_t and a hidden
state h_t. The mechanics of the GRU are defined by the following equations:

    z_t = σ(W_xz x_t + W_hz h_{t−1})
    r_t = σ(W_xr x_t + W_hr h_{t−1})
    h̃_t = tanh(W_xh x_t + r_t ∘ W_hh h_{t−1})
    h_t = z_t ∘ h_{t−1} + (1 − z_t) ∘ h̃_t

where ∘ denotes an element-wise product. W_xz, W_xr, and W_xh represent learned matrices that
project from the input size to the hidden size. W_hz, W_hr, and W_hh represent learned matrices
that project from the hidden size to the hidden size. In our work we evaluate the GRU in the
categorical prediction setting. For each document, the hidden state after the last word h_L is used
for the prediction ŷ of the label y.
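The following is a small numpy sketch of these equations (illustrative, not the original Theano implementation); P is an assumed dictionary holding the six weight matrices:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x_t, h_prev, P):
        # One step of the GRU equations above.
        z_t = sigmoid(P["W_xz"] @ x_t + P["W_hz"] @ h_prev)    # update gate
        r_t = sigmoid(P["W_xr"] @ x_t + P["W_hr"] @ h_prev)    # reset gate
        h_tilde = np.tanh(P["W_xh"] @ x_t + r_t * (P["W_hh"] @ h_prev))
        return z_t * h_prev + (1.0 - z_t) * h_tilde            # interpolate old/new state

    def encode_document(word_vectors, P, hidden_size):
        # Feed the word vectors through the GRU; the final state h_L is the
        # shared hidden representation used for prediction.
        h = np.zeros(hidden_size)
        for x_t in word_vectors:
            h = gru_step(x_t, h, P)
        return h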
As such, we treat h_L as the shared hidden representation h_shared
from section 3.3 for our experiments.
The prediction goes through one other non-linear function f after the final hidden state is derived:

    ŷ = f(W_yh h_L + b_y)

In our experiments we use the softmax function, but others are useful in different settings. A model
that builds on top of GRUs with an external memory storage paradigm currently
holds the state of the art on movie review sentiment analysis. However, we focus just on the straightforward
single layer GRU model in our experiments so that we can more easily disentangle factors
of influence on performance. Our GRU model was fed a sequence of fixed 300 dimensional GloVe
vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a
common crawl of the internet, as the input x_t for all tasks. It has been shown in a number of papers
that tuning the word embeddings during training could increase performance, and it is possible our
approach could have performed better had we done so.
Our neural network models were implemented in Theano (Theano Development Team, 2016) and
trained with Stochastic Gradient Descent. As we did not use an advanced optimization method and
noticed run to run variation in performance, for all of our transfer learning models we trained 10
parallel versions and chose the one with the highest validation accuracy. The SemEval 2016 Task 4
Subtask A training set consists of 10,000 total training examples, but we were only able to receive
8,906 because of tweet removals when we used the downloading script. For the target task data
across our experiments, 7,600 examples of the SemEval training set were used for training
and the rest for validation. The GRU model achieves only 53.6% accuracy on the SemEval testing
data when just training with the target task data and random initialization. In order to improve, we
consider knowledge transfer from GRUs trained for the following source tasks to the SemEval target
task data:
Distilling Logical Rules: Knowledge distillation can be performed using teacher models that are
very different in structure than their neural network based student models. We demonstrate with this
task that a compilation of logical linguistic rules can be used as an effective teacher for a GRU by
having the GRU attempt to recreate the output of the rule engine generated over unlabeled in domain
data. Specifically, our gazetteer based logical rule engine separates sentences and phrases in the text.
It then applies dictionaries of positive and negative sentiment words and phrases to the corresponding
text. For each positive or negative phrase found, it checks to see if negation or double negation are
applied, and modifies the polarity of the sentiment accordingly. The result for any piece of text is
a count of positive and negative sentiment occurrences. For this task, we simply count the total
number of positive and negative indicators to give an overall positive, negative or neutral score. We
provide additional details on how we mapped rules to soft targets for the student network to recreate in
Appendix A. We utilized a GRU model with 50 hidden units and 50,000 unlabeled examples for our
source task model. We distill off the soft labels as in (Hinton et al., 2015), but set our temperature
fixed at 1.0. It is possible that our performance could have improved by tuning this parameter.
Additional details about the selection of the network and data size are included in Appendix B.
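As a sketch of how soft targets can be produced, the following applies a temperature softmax over teacher scores in the spirit of Hinton et al. (2015); the mapping from the rule engine's counts to scores is described in Appendix A, so the helper below is only a hypothetical stand-in:

    import numpy as np

    def soft_targets(teacher_scores, temperature=1.0):
        # Temperature softmax over per-class teacher scores; with the temperature
        # fixed at 1.0 this reduces to a plain softmax.
        z = np.asarray(teacher_scores, dtype=float) / temperature
        z = z - z.max()                  # numerical stability
        e = np.exp(z)
        return e / e.sum()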
The logical rule model itself achieves 57.8% accuracy on the SemEval testing data, and the rules
distilled into a GRU as explained in section 4 achieve 58.9% accuracy before any integration with
the SemEval target task data. We leverage this task for comparison of knowledge transfer techniques
when the source task and target task share an output space, as discussed in section 3.2.
Binary Movie Reviews: For knowledge transfer from related tasks as discussed in section 3.3, we
first consider the Stanford Sentiment Treebank (Socher et al., 2013), which is a popular sentiment
dataset based on the movie review domain. We consider one source task to be the binary (positive
and negative) sentence level sentiment subtask, which contains 6,920 training examples, 872 validation
examples, and 1,821 testing examples. Our GRU model with 40 hidden units achieves 85.5%
accuracy on this task.
Five Class Movie Reviews: We also consider another source task leveraging the Stanford Sentiment
Treebank data from the fine grained (very positive, positive, neutral, negative, and very negative)
sentence level sentiment subtask, which contains 8,544 training examples, 1,101 validation examples,
and 2,210 testing examples. We use a GRU model with 200 hidden units to accommodate for
the increased task complexity and achieve 45.9% accuracy. This fine grained model can actually be
assessed directly on the SemEval task by projecting from five classes to three classes, but it only
achieves 44.2% accuracy with no tuning on the target task data. Our performance on these two
movie review source tasks is quite similar to previously reported results obtained with a similar
setup, but with LSTMs for both subtasks.
Emoticon Heuristic: Finally, we consider a semi-supervised task based on emoticon prediction motivated
by the successful work in (Go et al., 2009), leveraging it in the twitter sentiment domain and
its use as a vital component of the SemEval competition winning system (Bethard et al., 2016). We
find unlabelled tweets that contain smileys, frowns, or laughing emoticons. We remove emoticons
from the tweet before prediction and compile a dataset of 250,000 training examples, 50,000 validation
examples, and 100,000 testing examples for each of the three classes. This is multiple orders
of magnitude smaller than the 90 million tweets used in (Bethard et al., 2016), to allow for quick
experimentation. Our GRU model with 50 hidden units achieves 63.4% accuracy on the emoticon
prediction test set.
We consider multiple sequential knowledge transfer algorithms for experimental comparison. Each
uses only the source task data for learning the source task and only the target task data for integrating
with the target task. This way integration is fast and simple, because it does not incorporate storage
and replay of examples from the potentially very large source task, as argued in (Li & Hoiem, 2016).
Fine-Tuning: The representation is simply initialized with the representation found after training on
the source task and then trained as usual on the target task. This approach was pioneered in (Hinton
& Salakhutdinov, 2006), in application to unsupervised source tasks, and applied to transfer learning
in (Bengio et al., 2012) and (Mesnil et al.). The learning rate is tuned by a grid search based on the
validation set performance.
Progressive Networks: We also compare with our implementation of a progressive neural network
(Rusu et al., 2016), where the representation learned for the source task is held fixed and integrated
with a target task specific model via lateral connections trained using the target task data.
The
learning rate is also tuned based on a grid search using the validation set.
Learning without Forgetting (LwF): In the LwF paradigm, joint training is performed after parameter
initialization. This is achieved by treating the target task data and the output generated by
the source task model based on the target task input data as two jointly learned tasks, as in (Caruana,
1997). As opposed to our proposed forgetting cost, the source task specific parameters are not held
fixed while training on the target task data. The learning rate and mixing rate between the tasks are
tuned by a grid search based on validation set performance. We first consider a version of the LwF
model that leverages a random initialization of the target task specific parameters and initialization
of all parameters learned on the source task with the learned values. We also consider another formulation
that we call Greedy LwF. This is actually more closely aligned with the original paper of
Li & Hoiem (2016). All source task parameters are first held fixed, and the target task specific parameters
are learned alone before joint training with all of the parameters unfrozen as a second step.
For the case of source tasks with output in the space of the target task output, there are no source
task specific parameters, so the forgetting cost can be viewed as a viable interpretation of the LwF
paradigm appropriate in that setting.
Forgetting Cost: Finally, we compare each baseline model with our proposed forgetting cost described
in section 3. The learning rate as well as α_f from equations 1 and 3 were tuned by a grid
search based on the validation set performance.
Our experimental results on the SemEval data validate our intuition that the forgetting cost should
lead to stronger regularization and better generalization performance. One thing to note about our
progressive neural networks implementation is that it effectively has only one hidden layer, because
we hold our embeddings fixed during model training and the same embeddings are shared among
the models used for all of the tasks. It is possible that having multiple layers of lateral connections
is important to achieving good performance. However, this setting was not applicable in our
experiments. Our results for sequential knowledge transfer on the SemEval benchmark are quite
encouraging, as the forgetting cost outperforms baselines significantly in all cases.
We additionally have validated the intuition that equation 1 should perform stronger regularization
than equation 3 when equation 1 is applicable. In fact, for our distilled logical rule model tuning
experiments, we found that equation 1 performs 3% better on the test set. In an attempt to understand
more about what caused this performance difference, we monitored testing set performance at each
epoch and noticed that equation 3 is actually prone to overfitting away from a good solution on the
test set. However, it often finds a pretty good one comparable to equation 1 early in training. When
equation 1 could be applied, it seems to be a useful regularization to constrain both the hidden
layer and the output layer to align with the model learned on the source task. In equation 3, the
hidden to output transformation learned for the target task can in contrast learn to deviate from the
transformation learned for the source task.
We empirically evaluate the generalization performance of the forgetting cost for sequential knowledge
transfer from four different source tasks in Table 1 and Table 2. The source task considered in
Table 1 is distilling a logical rule model, leveraging the technique outlined in equation 1. In Table 2,
we leverage the forgetting cost for related task knowledge transfer as outlined in equation 3.
In Table 3 we explore the retention of empirical performance on the source task for knowledge
transfer algorithms after integration with the target task is complete. Apparently in these cases,
allowing relearning of the source task model during integration with the target task data is indeed
destructive to source task performance. LwF outperforms Fine-Tuning significantly in knowledge
retention for movie reviews, but interestingly does not for the emoticon heuristic. The effect of the
greedy target task initialization strategy also appears inconsistent. It seems it is possible that this
greedy initialization could improve our proposed forgetting cost paradigm in some cases as well.
However, a rigorous analysis of the tradeoffs for this initialization approach is beyond the scope of
this paper."}, {"section_index": "6", "section_name": "5.5 INSPECTION OF LEARNED REPRESENTATIONS", "section_text": "Now that we have established the empirical benefits of our proposed forgetting cost, we will demonstrate
what it achieves qualitatively through examples. In Table 4 we include a sample of examples
that are predicted correctly by transferring the knowledge source with the forgetting cost paradigm
and not with fine-tuning based integration. The effect is, perhaps, easiest to understand for the rule
based and movie review based transfer scenarios. For the rule based transfer setting you can literally
map insights that are not forgotten to their respective logical rule in the model, as is the case
in these examples. Moreover, we can see movie domain specific terminology such as "May the
force be with" is seemingly forgotten with standard fine-tuning, but not when the forgetting cost
regularization is applied.
Table 3: Evaluation of accuracy on the source task after integration with the target task data of
SemEval 2016 Task 4 Subtask A.
The accuracy after only source task training prior to integration
with the target task is included for reference as a baseline.

Source Task | Fine-Tuning | LwF | Greedy LwF | Forgetting Cost | Source Only
Binary Movie Reviews | 80.7% | 81.3% | 81.5% | 83.3% | 85.5%
Five Class Movie Reviews | 41.6% | 42.8% | 43.1% | 43.3% | 45.9%
Emoticon Heuristic | 59.4% | 59.1% | 58.9% | 60.3% | 63.4%

Table 1: Evaluation of target task tuning methodologies for a distilled rule model to the task of
SemEval 2016 Task 4 Subtask A.
Table 2: Evaluation of knowledge transfer from three source tasks to the task of SemEval 2016
Task 4 Subtask A.

Source Task | Fine-Tuning | Progressive Networks | LwF | Greedy LwF | Forgetting Cost
Binary Movie Reviews | 57.3% | 54.5% | 58.1% | 58.8% | 59.7%
Five Class Movie Reviews | 57.4% | 54.6% | 57.1% | 56.6% | 58.2%
Emoticon Heuristic | 55.8% | 53.2% | 57.7% | 56.7% | 58.6%

As the source task representation is literally stored fixed as part of the target task representation in
progressive neural networks, it is not clear how to assess any effective forgetting of the source task
during target task integration. As a result, we omit them from our source task forgetting experiments.
Table 4: Some transfer learning examples from each knowledge source to SemEval 2016 where the
GRU model successfully predicts sentiment when using the forgetting cost paradigm, but not with
fine-tuning based integration.

Source | Tweet | Label | Fine-Tuning | Forgetting Cost
Logical Rules | John Kasich should feel proud of his performance at the #GOPDebate Thursday night. He looked more presidential than the rest of the field. | Positive | Neutral | Positive
Logical Rules | @BrunoMars I'm so tired of you dressing like you ain't got no money. You went from wearing Gucci loafers to 6th grade boy Sketchers. | Negative | Neutral | Negative
Logical Rules | @DavidVonderhaar loving the beta Vahn, even playing it on PC with a PS4 controller without aim assist, can't wait for November 6 | Positive | Neutral | Positive
Movie Reviews | Selena Gomez presented Amy Schumer with an award and a heap of praise at the Hollywood Film Awards on November 1. | Positive | Negative | Positive
Movie Reviews | mailjet: It's Fri...we mean Star Wars Day. May the force be with all of your emails! https://t.co/FbDdjiJVUT | Positive | Neutral | Positive
Movie Reviews | Straight Outta Compton's success hopefully convinces New Line Cinema to give Ice Cube the right budget for the last Friday movie. | Positive | Neutral | Positive
Emoticons | That ball Kris Bryant just hit is the 2nd farthest ball I've ever seen hit. He is officially ridiculous. | Positive | Neutral | Positive
Emoticons | This fandom's a mess omg, I wouldn't be surprise if tomorrow there's a trend who says Niall's going to marry his cousin #WeKnowTheTruth | Negative | Positive | Negative
Emoticons | Christians snapchat story makes me want to kill myself..like I feel like a depressed 8th grader going through that emo phase | Negative | Neutral | Negative

Considering that we have shown a neural network can distill and improve a representation learned
by a logical rule engine, how the final representation differs from the logic of the original engine
is of practical interest. We thus compare the agreement of our fine-tuned rule based GRU with the
original rule model on the SemEval testing set. We find that the transferred model achieves 78.7%
agreement with the rule model when the rule model is right. This clearly indicates that our final
model is not deterministic based on the rule engine, and has a probability of adding errors even
when the original rule model works well. However, our model actually has 44.7% accuracy on the
examples the rule model got wrong. Our approach yields significant gains in comparison to the
original rule classifiers, improving from 57.8% to 64.4% test set accuracy before even incorporating
in auxiliary knowledge sources.
In our experiments we tried to find a balance between an ensemble model that is powerful enough
to have an adaptive weighted average decision function and not so powerful that it overfits on our
limited training and validation data. Our model is quite similar in architecture to the gating network
component of a hierarchical mixture of experts model (Jacobs et al., 1991; Jordan & Jacobs, 1994).
We tried our model over all four representations at once and found that it overfits. Our experiments
showed it is more effective to adopt a greedy ensembling strategy where all models are combined
with the best performing model on the validation set at each phase until only two models are left.
Finally, these two models are combined with the same mechanism.
Related work suggests that a many element gating network can be improved with a sparsity constraint, but this did not work as well as the greedy strategy for our model and experiments.
More formally, for any two models A and B combined in an ensemble, we train the following mechanism using Stochastic Gradient Descent (a code sketch of this mechanism appears below):
m_A = \sigma(W_A y_A + b_A)
m_B = \sigma(W_B y_B + b_B)
\alpha_A = m_A / (m_A + m_B)
\alpha_B = m_B / (m_A + m_B)
y_{ensemble} = \alpha_A y_A + \alpha_B y_B
where y_{ensemble} is the prediction vector of the combined ensemble, and y_A and y_B are the output vectors of the individual models.
Table 4: Some transfer learning examples from each knowledge source to SemEval 2016 where the GRU model successfully predicts sentiment when using the forgetting cost paradigm, but not with fine-tuning based integration.
Source | Tweet | Label | Fine-Tuning | Forgetting Cost
Logical Rules | John Kasich should feel proud of his performance at the #GOPDebate Thursday night. He looked more presidential than the rest of the field. | Positive | Neutral | Positive
Logical Rules | @BrunoMars I'm so tired of you dressing like you ain't got no money. You went from wearing Gucci loafers to 6th grade boy Sketchers. | Negative | Neutral | Negative
Logical Rules | @DavidVonderhaar loving the beta Vahn, even playing it on PC with a PS4 controller without aim assist, can't wait for November 6 | Positive | Neutral | Positive
Movie Reviews | Selena Gomez presented Amy Schumer with an award and a heap of praise at the Hollywood Film Awards on November 1. | Positive | Negative | Positive
Movie Reviews | mailjet: It's Fri...we mean Star Wars Day. May the force be with all of your emails! https://t.co/FbDdjiJVUT | Positive | Neutral | Positive
Movie Reviews | Straight Outta Compton's success hopefully convinces New Line Cinema to give Ice Cube the right budget for the last Friday movie. | Positive | Neutral | Positive
Emoticons | That ball Kris Bryant just hit is the 2nd farthest ball I've ever seen hit. He is officially ridiculous. | Positive | Neutral | Positive
Emoticons | This fandom's a mess omg, I wouldn't be surprise if tomorrow there's a trend who says Niall's going to marry his cousin #WeKnowTheTruth | Negative | Positive | Negative
Emoticons | Christians snapchat story makes me want to kill myself..like I feel like a depressed 8th grader going through that emo phase | Negative | Neutral | Negative
Table 5: Empirical three way sentiment classification results on the SemEval 2016 Task 4 Subtask A test set.
Our ensemble model was trained on what was set aside as the validation data during the initial training with early stopping. In the first phase of combining, the model transferred from the logical rule source task was combined with each model. In the second phase, the model based on transfer from the binary movie review sentiment model was combined with each model. In the third phase, the two remaining models were combined. The results of our ensemble in Table 5 suggest that it is possible to further improve the performance of a single sequential transfer model by intelligently combining its predictions with models that have other perspectives. This is because they are modeled using different source tasks for prior knowledge. Impressively, our final distilled model surpasses results from all prior models on the SemEval 2016 benchmark using the same final architecture of a 50 hidden unit GRU model that is clearly not even competitive when trained simply on the task specific labeled data. The prior best model SwissCheese (Bethard et al., 2016) consists of a random forest ensemble built utilizing multiple convolutional neural network models and distant supervision. In fact, we achieve superior results despite using over an order of magnitude less total data for training our model.
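As an illustration of the gating mechanism formalized above, the following minimal numpy sketch combines two prediction vectors with learned scalar gates. All names (W_A, b_A, the toy dimensions and inputs) are our own illustrative assumptions rather than values from the paper; in training, gradients would flow through the gates via SGD as described.
```python
# Minimal sketch of the two-model gating ensemble; illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_ensemble(y_A, y_B, W_A, b_A, W_B, b_B):
    """Combine two prediction vectors with learned scalar gates."""
    m_A = sigmoid(W_A @ y_A + b_A)         # confidence gate for model A
    m_B = sigmoid(W_B @ y_B + b_B)         # confidence gate for model B
    alpha_A = m_A / (m_A + m_B)            # normalized mixing weights
    alpha_B = m_B / (m_A + m_B)
    return alpha_A * y_A + alpha_B * y_B   # adaptive weighted average

# Toy usage: 3-way sentiment distributions from two models.
rng = np.random.default_rng(0)
y_A = np.array([0.6, 0.3, 0.1])            # e.g. positive/neutral/negative
y_B = np.array([0.2, 0.5, 0.3])
W_A, W_B = rng.normal(size=(1, 3)), rng.normal(size=(1, 3))
b_A, b_B = np.zeros(1), np.zeros(1)
print(gated_ensemble(y_A, y_B, W_A, b_A, W_B, b_B))
```
In the greedy strategy described above, this same two-model mechanism is simply reapplied at each combination phase, treating the previous combination's output as one of the two inputs.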
We would also like to underscore that our total improvement of 1.5% as a result of creating an ensemble with our best transferred model from the logical rule source task can be viewed as quite disappointing, despite achieving state of the art results. In fact, in the theoretical limit of having a decision model that switches to the best already learned model at each point, our four transferred representations would achieve 85.1% accuracy together. For the combination of the movie review based models and logical rule based model we can get to 81.4% accuracy. Moreover, we can get 76.5% accuracy with just the logical rule based transfer model and the emoticon prediction based transfer model. Unfortunately, we achieve nowhere near these theoretical results despite representations that are apparently quite diverse. This seems indicative that there are significant gains yet to be uncovered in integrating these representations."}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "We consider a new methodology called the forgetting cost for preventing the catastrophic forgetting problem of neural network sequential transfer learning. The forgetting cost is practical and easy to implement. We have demonstrated for the challenging task of Twitter sentiment analysis that it can uncover significant gains in generalization performance and that it seems to not forget knowledge traditionally forgotten from the source task during fine-tuning. Our strong empirical results still motivate multiple avenues with high potential for continued exploration in text analytics. Using logical rules to improve neural network models is a promising direction for humans to efficiently contribute to increased model performance. Additionally, the large diversity of representations learned from multiple classifiers with the same target task but different source tasks seems to indicate there is potential to see even much greater gains when integrating multiple sources of knowledge transfer."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Giuseppe Attardi and Daniele Sartiano. Unipi at semeval-2016 task 4: Convolutional neural networks for sentiment classification. Proceedings of SemEval, pp. 220-224, 2016.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Artur S d'Avila Garcez, Krysia Broda, and Dov M Gabbay. Neural-symbolic learning systems: foundations and applications, 2012.
Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. 2009.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504-507, 2006.
Steven Bethard, Daniel M. Cer, Marine Carpuat, David Jurgens, Preslav Nakov, and Torsten Zesch (eds.). Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, 2016. The Association for Computer Linguistics. ISBN 978-1-941643-95-2. URL anthology/S/S16/
Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318, 2016.
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79-87, 1991.
Brage Ekroll Jahren, Valerij Fredriksen, Björn Gambäck, and Lars Bungum. Ntnusenteval at semeval-2016 task 4: Combining general classifiers for fast twitter sentiment analysis. Proceedings of SemEval, pp. 103-108, 2016.
Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181-214, 1994.
Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.
Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, pp. 614-629. Springer, 2016.
David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. stat, 1050:26, 2016.
Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 142-150. Association for Computational Linguistics, 2011.
Jacob MJ Murre. Learning and categorization in modular neural networks. 1992.
Mahmoud Nabil, Mohamed Aly, and Amir F Atiya. Cufe at semeval-2016 task 4: A gated recurrent model for sentiment classification. Proceedings of SemEval, pp. 52-57, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.
Matthew Riemer, Sophia Krasikov, and Harini Srinivasan. A deep learning and knowledge transfer based architecture for social media user characteristic determination. SocialNLP 2015 at NAACL, pp. 39, 2015.
Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995.
Anthony Robins. Consolidation in neural networks and in the sleeping brain. Connection Science, 8(2):259-276, 1996.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
Sebastian Ruder, Parsa Ghaffari, and John G Breslin. Insight-1 at semeval-2016 task 5: Deep learning for multilingual aspect-based sentiment analysis. arXiv preprint arXiv:1609.02748, 2016.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank.
In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, pp. 1642. Citeseer, 2013.
Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
Sebastian Thrun. Is learning the n-th thing any easier than learning the first? Advances in neural information processing systems, pp. 640-646, 1996.
Geoffrey G Towell, Jude W Shavlik, and Michiel O Noordewier. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Citeseer, 1990."}, {"section_index": "9", "section_name": "A MAPPING SENTIMENT RULES TO SOFT TARGETS", "section_text": "The gazetteer based logical rule engine separates sentences and phrases in the text. It then applies dictionaries of positive and negative sentiment words and phrases to the corresponding text. For each positive or negative phrase found, it checks to see if negation or double negation are applied, and modifies the polarity of the sentiment accordingly. The result for any piece of text is a count of positive and negative sentiment occurrences. For this task, we simply count the total number of positive and negative indicators to give an overall positive, negative or neutral score. To be concrete, we have a simple procedure for mapping positive and negative word counts to soft labels that could be used for distillation. If there are no positive or negative words, the output vector is a one hot vector corresponding to a neutral label. If there are an unequal number of positive and negative sentiment words, the neutral label is zero and the raw counts are sent to the softmax function to create a soft label over the positive and negative word occurrences. Finally, if there are an equal amount of positive and negative words, we consider the added total sentiment words plus one in the neutral label as well as the number of positive words and negative words before sending these totals through a softmax function.
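To make this mapping concrete, here is a short Python sketch of the count-to-soft-label procedure just described. The [positive, neutral, negative] label ordering is our assumption for illustration; the paper does not specify the vector layout.
```python
# Sketch of the rule-count -> soft-label mapping from Appendix A.
# Label ordering [positive, neutral, negative] is an assumption.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def rule_counts_to_soft_label(n_pos, n_neg):
    if n_pos == 0 and n_neg == 0:
        return np.array([0.0, 1.0, 0.0])          # one-hot neutral
    if n_pos != n_neg:
        p_pos, p_neg = softmax(np.array([n_pos, n_neg], dtype=float))
        return np.array([p_pos, 0.0, p_neg])      # neutral zeroed out
    # tie: neutral logit is total sentiment words plus one
    logits = np.array([n_pos, n_pos + n_neg + 1, n_neg], dtype=float)
    return softmax(logits)

for counts in [(0, 0), (3, 1), (2, 2)]:
    print(counts, rule_counts_to_soft_label(*counts))
```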
"}, {"section_index": "10", "section_name": "B SIZE SELECTION FOR THE RULE DISTILLATION TASK", "section_text": "In Table 6 we detail the performance of distilling a logical rule engine into a GRU based recurrent neural network by imposing soft labels over unlabeled tweets. The fact that we keep our word representations fixed with general purpose unsupervised data makes it difficult for the GRU to distill the entire model without a large number of examples. Additionally, as there were a large number of examples in our distillation experiments, we did not experience high run to run variation and only trained a single GRU model for each distillation experiment (as opposed to picking the best validation error of 10 parallel training routines as in our transfer experiments).
Hidden Units | Examples | Alignment with Teacher | Accuracy on SemEval Test Set
25 | 50,000 | 88.3% | 59.1%
25 | 300,000 | 91.9% | 58.6%
50 | 50,000 | 88.6% | 58.9%
50 | 300,000 | 93.0% | 58.5%
75 | 50,000 | 88.7% | 58.9%
75 | 300,000 | 93.6% | 58.3%
100 | 50,000 | 88.6% | 58.7%
100 | 300,000 | 93.8% | 58.1%
125 | 50,000 | 88.5% | 58.7%
125 | 300,000 | 93.7% | 58.3%
150 | 50,000 | 88.5% | 59.0%
150 | 300,000 | 94.0% | 58.5%
Table 6: Logical rule engine distillation performance and SemEval 2016 Task 4 Subtask A accuracy as a function of the number of hidden units in the GRU and the number of training examples. The 50 hidden unit and 50,000 training example model performs the best on the SemEval training set.
Our distilled GRU is better on the testing set than the original classifier, likely because this input representation prevents the model from overfitting to the idiosyncrasies of the rule engine. This actually underscores an important point for the distillation of abstract knowledge. If the target task is known during distillation, it may be beneficial to stop short of totally distilling the original knowledge as it may hurt downstream performance past a certain point. We impose a simple policy where the best hidden unit and training example combination is selected based on performance on the training data of the target task. As a result, we use the model with 50 hidden units based on 50,000 training examples in our experiments integrating with other knowledge. This model is a pretty good one to choose, and achieves high transfer performance relative to models that overfit on the teacher network."}]
HJjiFK5gx
[{"section_index": "0", "section_name": "NEURAL PROGRAM LATTICES", "section_text": "Chengtao Li *\nMassachusetts Institute of Technology\nCambridge, MA 02139, USA\n{dtarlow, algaunt,mabrocks, nkushman}@microsoft .com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A critical component of learning to act in a changing and varied world is learning higher-level\nabstractions of sequences of elementary tasks. Without such abstractions we would be forced to\nreason at the level of individual muscle contractions, making everyday tasks such as getting ready\nfor work and making dinner almost impossible. Instead, as humans, we learn a hierarchy of skills\nstarting with basic limb movements and eventually getting to the level of tasks such as get ready\nfor work or drive to the airport. These abstractions have many different names. For example, in\ncomputer programming they are called functions or subroutines and in reinforcement learning they\nare called options or temporally extended actions. They facilitate learning in two important ways.\nFirst, they enable us to learn faster, i.e. with lower sample complexity. Second, they enable us to\nstrongly generalize from our prior experience so that we can, for example, drive to a new location\nonce we have learned how to drive to a few other locations.\nA primary mechanism used for learning is watching others perform a task. During such demon-\nstrations, one typically observes the elementary operations performed, such as the movements of\nindividual limbs or the mouse clicks in a computer interface. In some cases, the demonstrations can\nalso provide supervision of the abstract operations (i.e., the abstraction hierarchy) that generated\nthe elementary operations, either through a formal annotation process or through informal natural\nlanguage descriptions. Recent work on Neural Programmer-Interpreters, NPI\nhas shown that when the training data includes both elementary and abstract operations,\nlearning the abstractions results in strong generalization capabilities. This enables, for example, the\nability to add verv large numbers when trained only on the addition of relatively small numbers.\n\u201cWork done primarily while author was an intern at Microsoft Research."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose the Neural Program Lattice (NPL), a neural network that learns to per-\nform complex tasks by composing low-level programs to express high-level pro-\ngrams. Our starting point is the recent work on Neural Programmer-Interpreters\n(NPI), which can only learn from strong supervision that contains the whole hi-\nerarchy of low-level and high-level programs. NPLs remove this limitation by\nproviding the ability to learn from weak supervision consisting only of sequences\nof low-level operations. We demonstrate the capability of NPL to learn to perform\nlong-hand addition and arrange blocks in a grid-world environment. Experiments\nshow that it performs on par with NPI while using weak supervision in place of\nmost of the strong supervision, thus indicating its ability to infer the high-level\nprogram structure from examples containing only the low-level operations.\nProviding supervision of the abstract operations during a demonstration requires significant addi-\ntional effort, however, and so in typical real-world scenarios we will observe only the elementary\noperations. 
For example, we can see a person's limbs move (elementary operations), but we cannot see the mental states that led to these movements (abstract operations). In the same vein, we can easily capture a user's clicks in an online application or their real-world movements using a skeletal tracking depth camera (Microsoft Corp. Redmond WA). NPI cannot directly be applied to data like this, however, because the data does not contain the abstraction hierarchy. This motivates the desire for a model which can learn an abstraction hierarchy from only sequences of elementary operations, but this is an ill-posed problem that requires either additional modeling assumptions or some strongly supervised data. In this work, we take a first step by assuming access to a small number of strongly supervised samples that provide the components of the abstraction hierarchy and disambiguate which of infinitely many abstraction hierarchies are preferred. While we currently only consider domains without noise, we believe our work provides a starting point for future research on adding additional modeling assumptions that could remove the need for strong supervision altogether.
There are several technical issues that arise in developing NPL, which are addressed in this paper. In section 2 we reformulate the NPI model to explicitly include a program call stack, which is necessary for the later modeling developments. Next we need to formulate a training objective for weakly supervised data instances. Ideally we could treat the abstract operations as latent quantities and optimize the marginalized log probability that arises from summing out the abstract operations. However, there are exponentially many such abstraction hierarchies, and so this is computationally intractable. To overcome this challenge, we compute an approximate dynamic program by building on two ideas from the literature. First, we draw inspiration from Connectionist Temporal Classification, CTC (Graves et al., 2006), observing that it provides a method for learning with latent alignments. In section 3.1 we reformulate the CTC objective into a feedforward process that executes a dynamic program. Applying this to our problem, however, requires handling the program call stack. In section 3.2 we do this through an approximation analogous to that of Stack-Augmented Recurrent Nets, StackRNNs (Joulin & Mikolov, 2015), resulting in a fully-differentiable feedforward process that executes a dynamic program to approximately compute the marginalized log probability that we desire. Finally, we observe in section 3.3 that there are alternative dynamic programs for approximating the desired marginalized log probability and present one that uses more computation to more closely resemble the exact (exponentially expensive) dynamic program while remaining tractable.
Our key contributions can be summarized as follows:
- We show how ideas from CTC and StackRNNs can be adapted and extended to enable the training of NPI-like models from only flat sequences of elementary operations and world states.
- We introduce a method to compute a more accurate approximation of marginalized log probabilities in such models.
- On the long-hand addition task from Reed & de Freitas (2016) and a new task involving arranging blocks in a grid-world, we demonstrate empirically that using NPL to train with elementary operation sequences combined with only a few training samples with full program traces can achieve similar performance to NPI but with weaker supervision."}, {"section_index": "3", "section_name": "2 MODEL BACKGROUND", "section_text": "The NPI model is based on a Recurrent Neural Network (RNN) which, at each step, either calls an abstract program, performs an elementary operation, or returns from the current program. To make this decision, each step of the RNN takes as input: (1) a learnable embedding of the program to execute, (2) embedded arguments for this program, and (3) an embedding of the current world state.
Calling an abstract program resets the LSTM hidden state to zero and updates the program and arguments provided as input to the following steps. Returning from an abstract program inverts this process, restoring the hidden state and input program and arguments to those from before the program was called. Performing an elementary operation updates the world state, but leaves the current program and arguments in place, and performs the standard LSTM update of the hidden state.
Rather than present the details of the NPI model as in Reed & de Freitas (2016), we will cast it in the formulation that we will use throughout the paper. The main difference is that our presentation will explicitly maintain a call stack, which we will refer to as Stack-based NPI. Morally this does not change the model, but it will enable the extension to weaker supervision described in section 3. The basic structure of the reformulated model can be seen in Figure 1. The model learns a library of programs, G, and arguments, R, to these programs, where each program g ∈ R^n and each argument r ∈ R^m is represented as an embedding, with n and m as the embedding dimensions. When a program is called with a list of arguments it performs a sequence of actions, where each action is one of: OP, PUSH, or POP. OP performs an elementary operation, e.g. move one step. PUSH calls to another program. POP returns from the current program back to the parent program.
Figure 1: Stack-based NPI: Four time steps from the execution of the stack-based NPI model. Each color/hash pattern represents a unique set of unchanging data values which, over time, move up and down (and in and out of) the stack. Operations below the dotted line to calculate the new world state are executed only at test time, since we do not have access to f_world at training time, and the training data contains the correct sequence of world states.
An LSTM-based controller, shown in Figure 2, is used to generate the sequence of actions, deciding the action at timestep t based on the currently running program and arguments, g^t_in, the LSTM's internal state h^t_in, and an observation of the current world state, w^t. To support calls to and returns from subprograms, the controller state contains two call stacks, one for the internal RNN state, which we denote as M (green in Figure 1), and one for the program and arguments, which we denote as S (red in Figure 1). M^t_d and S^t_d refer to the elements at depth d of the stacks at timestep t.
The training data for NPI requires full execution traces. We use π to denote all the observations recorded in a single full execution trace. Specifically, for timestep t in the execution we define π^t_w to be the input world state, and π^t_a to be the decision of which of the following actions to take: an elementary operation OP (with π^t_o the operation performed), a call to another program PUSH (with π^t_g the program called), or a return POP.
Figure 2: RNN Cell: A zoomed in view of the internals of an RNN cell from Figure 1.
Note that, as with the original NPI model, we also include arguments for both the operation and program calls, but for notational simplicity we subsume those into π^t_o and π^t_g respectively.
The stack updates are formally defined as:
M^{t+1}_d = [π^t_a = POP] M^t_1 + [π^t_a = OP] h^t_out + [π^t_a = PUSH] 0,   d = 0
M^{t+1}_d = [π^t_a = POP] M^t_2 + [π^t_a = OP] M^t_1 + [π^t_a = PUSH] h^t_out,   d = 1
M^{t+1}_d = [π^t_a = POP] M^t_{d+1} + [π^t_a = OP] M^t_d + [π^t_a = PUSH] M^t_{d-1},   d > 1
S^{t+1}_d = [π^t_a = POP] S^t_1 + [π^t_a = OP] S^t_0 + [π^t_a = PUSH] g^t_out,   d = 0
S^{t+1}_d = [π^t_a = POP] S^t_{d+1} + [π^t_a = OP] S^t_d + [π^t_a = PUSH] S^t_{d-1},   d > 0     (2.1)
The conditions in the Iverson brackets choose which type of update should be performed based on the action type. POPing from the stack moves all items up one location in the stack. Performing an elementary OP updates the top element of stack M to contain the new RNN hidden state but otherwise leaves the stacks unchanged. PUSHing onto the stack pushes the new program and arguments, g^t_out, onto stack S, pushes a default (zero) hidden state onto stack M, and moves all of the other elements in the stacks down one location.
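For concreteness, the following Python sketch (our illustration, not code from the paper) implements the hard stack updates of equation 2.1, with Python lists standing in for the fixed-depth stacks M and S.
```python
# Sketch of the discrete call-stack updates of equation 2.1.
# Index 0 is the top of the stack, matching M_0 / S_0 in the text.
import numpy as np

def stack_step(M, S, action, h_out=None, g_out=None):
    """M: stack of LSTM hidden states; S: stack of (program, args) embeddings."""
    M, S = list(M), list(S)
    if action == "OP":          # elementary op: overwrite top hidden state
        M[0] = h_out
    elif action == "PUSH":      # call: caller's updated state is buried,
        M[0] = h_out            # fresh zero state and new program on top
        M.insert(0, np.zeros_like(M[0]))
        S.insert(0, g_out)
    elif action == "POP":       # return: restore caller's state and program
        M.pop(0)
        S.pop(0)
    return M, S

# Toy usage with 4-dim hidden states and program embeddings.
M = [np.zeros(4)]
S = [np.ones(4)]                # top-level program embedding
M, S = stack_step(M, S, "PUSH", h_out=np.full(4, 0.5), g_out=np.full(4, 2.0))
M, S = stack_step(M, S, "OP", h_out=np.full(4, 0.9))
M, S = stack_step(M, S, "POP")
print(len(M), len(S))           # back to depth 1
```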
The RNN cell inputs are:
h^t_in = M^t_0, the current LSTM internal state,
g^t_in = S^t_0, the current program and arguments,
w^t = π^t_w, the current world state.
Inside the RNN cell, as shown in Figure 2, g^t_in and w^t are passed through a task specific encoder network, f_enc, to generate a combined embedding u^t, which is passed directly into an LSTM (Hochreiter & Schmidhuber, 1997). Formally,
u^t = f_enc(w^t, g^t_in),   h^t_out = f_lstm(u^t, h^t_in)
The LSTM output is passed in parallel through four different decoder networks to generate the following probability distributions: p^t_a, the action; p^t_g, the program to be called; p^t_r, the arguments for the program; and p^t_o, the elementary operation to be performed.
At training time our objective is to find neural network parameters θ which maximize the following (log) likelihood function:
p(π^t) = [π^t_a = OP] p^t_a(OP) p^t_o(π^t_o) + [π^t_a = PUSH] p^t_a(PUSH) p^t_g(π^t_g) + [π^t_a = POP] p^t_a(POP)     (2.3)
L(θ) = log p(π)
At test time, we use a greedy decoder that makes the decision with the highest probability for each choice. Formally:
g^t_out = argmax_{γ ∈ G} p^t_g(γ)
In this section we introduce our core contribution, a new framework for training NPI-like models when the training data contains only sequences of elementary actions instead of full program abstractions. The basis of our framework is the Neural Program Lattice, which approximately computes marginal probabilities using an end-to-end differentiable neural network.
In this section, the training data is an elementary operation trace λ, which includes a sequence of elementary steps, λ_o, and a corresponding sequence of world states, λ_w. For each elementary step, λ^i, the elementary operation performed is λ^i_o, and the input world state is λ^i_w. We define O as a many-to-one map from a full execution trace π to its elementary operation trace λ. With these definitions and p(π) as defined in equation 2.3, our desired (log) marginal likelihood for a single example becomes
L(θ) = log Σ_{π ∈ O^{-1}(λ)} p(π).     (3.1)
Computing this quantity is intractable because the number of possible executions |O^{-1}(λ)| is exponential in the maximum length of π and each execution may have unique stack states. In the following sections, we describe how to approximately compute this quantity so as to enable learning from weak supervision. To also learn from strong supervision, we simply add log p(π) terms to the objective for each strongly supervised example π."}, {"section_index": "4", "section_name": "3.1 CTC AS A FEED-FORWARD NETWORK", "section_text": "In formulating a loss function which approximates the exponential sum in equation 3.1, the first challenge is aligning the elementary steps, i, in the training data, to the timesteps, t, of the model. Specifically, when the model calls into a program or returns from a program in a given timestep, it does not perform any elementary operation in that timestep. As a result, the alignment between elementary steps in the data and the timesteps of the model depends crucially on the choice of high-level abstraction. To overcome this challenge, we draw inspiration from CTC (Graves et al., 2006).
CTC is an RNN-based neural network architecture used in speech recognition to handle the analogous problem of aligning audio sequence inputs to word sequence outputs. It can be seen as a combination of an RNN and a graphical model. The RNN computes a distribution over possible outputs for each timestep, while the graphical model consumes those distributions and uses a dynamic program to compute the marginal distribution over possible label sequences. A crucial assumption is that the RNN outputs at each timestep are conditionally independent, i.e. no feedback connections exist from the output layer back into the rest of the network. This assumption is incompatible with the NPI model because action decisions from timestep t determine the world state, hidden state, and program input for the next timestep. In section 3.2 we will adapt the CTC idea to work in the NPI setting. In this section we prepare by reformulating CTC into a feed forward neural network that can be trained with standard back propagation.
The main challenge solved by CTC is finding the alignment between the elementary steps, i, observed in the training data and the timesteps, t, of the model. To facilitate alignment discovery, the output layer in a CTC network is a softmax layer with a unit for each elementary operation in O, the set of elementary operations, as well as one additional unit for a BLANK output where no elementary operation is performed because (in our case) the model calls into a new program or returns from the current program. Define β ∈ O'^T as an output sequence over the alphabet O' = O ∪ BLANK. Additionally, define the many-to-one map B from an output sequence β to λ_o, the sequence of elementary operations created by removing all of the BLANK outputs from β. As discussed above, the CTC model assumes that the RNN inputs at time t are independent of the decisions made by the model, π. Thus for purposes of this subsection, we will assume both that h^t_in = h^{t-1}_out and that w = (w^1, ..., w^T) and g_in = (g^1_in, ..., g^T_in) are provided as inputs and are thus independent of the output decisions. We can then formally define
p^t(β^t | w, g_in) = p^t_a(POP | w, g_in) + p^t_a(PUSH | w, g_in),   if β^t = BLANK
p^t(β^t | w, g_in) = p^t_a(OP | w, g_in) p^t_o(β^t | w, g_in),   otherwise
p(β | w, g_in) = Π_{t=1}^{|w|} p^t(β^t | w, g_in)
L(θ | λ_o, w, g_in) = log p(λ_o | w, g_in) = log Σ_{β ∈ B^{-1}(λ_o)} p(β | w, g_in).
The dynamic program used by CTC to compute this likelihood is based on y^t_i, the total probability that as of timestep t in the model we have generated λ^{1:i}_o, the first i elementary actions in λ_o. y^t_i is calculated from w^{1:t} and g^{1:t}_in, the first t elements in w and g_in respectively. Formally,
y^t_i = Σ_{β ∈ B^{-1}(λ^{1:i}_o)} p(β | w^{1:t}, g^{1:t}_in)
y^t_i = p^t(λ^i_o | w^{1:t}, g^{1:t}_in) y^{t-1}_{i-1} + p^t(BLANK | w^{1:t}, g^{1:t}_in) y^{t-1}_i
This formulation allows the likelihood to be computed in a feed-forward manner and the gradients of θ to be computed using standard back propagation through time. Note that if there were feedback connections in the model, then it would not be sufficient to only use y^t_i as the dynamic programming state; we would need to keep track of all the different possible stack states after having produced the sequence prefix, which is what leads to the intractability.
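The recursion above can be implemented as a small forward dynamic program. The sketch below is our own illustration under the paper's definitions: BLANK mass is the probability of PUSH plus POP, and emitting the next target operation requires choosing OP and the correct operation. The toy distributions stand in for the RNN decoder outputs, which here are assumed independent of the alignment, as in this subsection.
```python
# Sketch of the CTC-style forward dynamic program: y[t][i] is the total
# probability of having emitted the first i elementary operations of
# lambda_o after t model steps.
import numpy as np

def ctc_forward(p_a, p_o, target_ops):
    T, I = len(p_a), len(target_ops)
    y = np.zeros((T + 1, I + 1))
    y[0, 0] = 1.0
    for t in range(T):
        blank = p_a[t]["PUSH"] + p_a[t]["POP"]   # no op emitted this step
        for i in range(I + 1):
            y[t + 1, i] += blank * y[t, i]       # stay at the same i
            if i > 0:                            # emit target op number i
                emit = p_a[t]["OP"] * p_o[t][target_ops[i - 1]]
                y[t + 1, i] += emit * y[t, i - 1]
    return y

# Toy usage: two steps, one target operation "MOVE".
p_a = [{"OP": 0.7, "PUSH": 0.2, "POP": 0.1}] * 2
p_o = [{"MOVE": 0.9, "WRITE": 0.1}] * 2
y = ctc_forward(p_a, p_o, ["MOVE"])
print(y[2, 1])  # probability of emitting exactly ["MOVE"] within 2 steps
```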
In the last section we assumed that the RNN inputs w and g_in were defined independently of the decisions π made by the model and that h^t_in = h^{t-1}_out. In this section we show how to relax these assumptions to handle the full Stack-based NPI model described in section 2. The key idea is that rather than propagating forward all possible stack states, which leads to a combinatorial explosion, we will propagate forward a single stack state which is a weighted average of all possible stack states, where the weights are computed based on local probabilities of actions at each timestep. This operation is analogous to that used in StackRNNs (Joulin & Mikolov, 2015). The result is a tractable and differentiable forward execution process that no longer exactly computes the desired marginal likelihood. However, we will show experimentally that learning with this model for weakly supervised examples leads to the behavior that we would hope for if we were learning from the true marginal log likelihood. That is, we can share model parameters while training on strongly and weakly labeled examples, and adding the weakly labeled data improves generalization performance.
In more detail, we estimate all quantities specified in π but not in λ using a soft-argmax function that computes deterministic functions of the previously observed or estimated quantities. These estimated quantities are π^t_a, π^t_g, and implicitly π^t_w. Both π^t_g and π^t_w can be directly replaced with a soft-argmax as follows:
w^t = (Σ_{i ∈ I} y^{t-1}_i λ^i_w) / (Σ_{i ∈ I} y^{t-1}_i)     (3.2)
g^t_out = p_soft(p^t_g) = Σ_{γ ∈ G} p^t_g(γ) γ
Replacing decision π^t_a with a soft-argmax changes the stack updates from equation 2.1 into differentiable stack updates similar to those used in Joulin & Mikolov (2015). Formally,
α^t(a) = (Σ_{i ∈ I} y^t_{i-1} p^t_o(λ^i_o) / y^{t+1}) p^t_a(a),   a = OP
α^t(a) = (y^t / y^{t+1}) p^t_a(a),   a ≠ OP
M^{t+1}_d = α^t(POP) M^t_1 + α^t(OP) h^t_out + α^t(PUSH) 0,   d = 0
M^{t+1}_d = α^t(POP) M^t_2 + α^t(OP) M^t_1 + α^t(PUSH) h^t_out,   d = 1
M^{t+1}_d = α^t(POP) M^t_{d+1} + α^t(OP) M^t_d + α^t(PUSH) M^t_{d-1},   d > 1
S^{t+1}_d = α^t(POP) S^t_1 + α^t(OP) S^t_0 + α^t(PUSH) g^t_out,   d = 0
S^{t+1}_d = α^t(POP) S^t_{d+1} + α^t(OP) S^t_d + α^t(PUSH) S^t_{d-1},   d > 0
with α introduced for notational simplicity. This change enables h^t_in and g^t_in to now depend on the distribution over output decisions at time t-1 via the stack, as g^t_in = S^t_0 and h^t_in = M^t_0, where S^t_0 (resp. M^t_0) are computed from S^{t-1} and p^{t-1}_a (resp. M^{t-1} and the LSTM cell's output at timestep t-1).
The last remaining complexity is that λ does not indicate the necessary number of model timesteps. Thus the likelihood function must sum over all possible execution lengths up to some maximum T and ensure that the final action is a return, i.e. POP. If we define I = |λ_o| then formally,
L(θ) = log Σ_{t ≤ T} p^t_a(POP) y^t_I     (3.3)
This gives a fully differentiable model for approximately maximizing the marginal probability of λ.
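As a concrete illustration of the blurred stack update, here is our own sketch in which the new stack is the α-weighted average of the three hard updates from equation 2.1. The fixed stack depth with zero padding (and depth at least 2) is an implementation assumption, not a detail specified by the paper.
```python
# Sketch of the "blurred" (soft-argmax) stack update of section 3.2: the
# new stack is an expectation over the PUSH/OP/POP updates, weighted by
# alpha. M is a fixed-depth (D, H) array; row 0 is the top of the stack.
import numpy as np

def soft_stack_step(M, alpha, h_out):
    """alpha: dict of weights for POP/OP/PUSH summing to 1; assumes D >= 2."""
    D, H = M.shape
    pop_M = np.vstack([M[1:], np.zeros((1, H))])    # shift everything up
    op_M = M.copy()
    op_M[0] = h_out                                 # overwrite the top
    push_M = np.vstack([np.zeros((1, H)), M[:-1]])  # zero state on top,
    push_M[1] = h_out                               # caller's updated state below
    return alpha["POP"] * pop_M + alpha["OP"] * op_M + alpha["PUSH"] * push_M

M = np.arange(12, dtype=float).reshape(4, 3)
alpha = {"POP": 0.2, "OP": 0.5, "PUSH": 0.3}
print(soft_stack_step(M, alpha, h_out=np.ones(3)))
```
The S stack is blended in exactly the same way, with g^t_out in place of h^t_out for the PUSH case and the top row left unchanged for the OP case.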
Although the model we have defined so far is fully differentiable, the difficulty in training smoothed models of this form has been highlighted in the original Neural Turing Machine work (Graves et al., 2014), as well as much of the follow on work (Gaunt et al., 2016; Feser et al., 2016; Neelakantan et al., 2016; Joulin & Mikolov, 2015). To help alleviate this difficulty, we introduce in this section the neural lattice structure after which Neural Program Lattices are named.
To motivate the need for this lattice, consider the set of possible program execution paths as a tree with a branch point for each timestep in the execution and a probability assigned to each path. Exact gradients could be computed by executing every path in the tree, calculating the gradient for each path, and then taking an average of the gradients weighted by the path probabilities. This solution is impractical however since it requires computation and memory that scales exponentially with the number of timesteps. To avoid this problem, the NTM and related techniques perform a single forward execution which is meant to approximately represent the simultaneous execution of all of the paths in the tree. To avoid the exponential explosion, the state at each timestep, i.e. tree depth, is approximated using a fixed-sized representation. The approximation representation chosen by both NTM and Joulin & Mikolov (2015) is a soft-argmax of the states generated by performing each of the possible actions on the previous approximate state.
We observe that these two choices are really extreme points on what is a continuous spectrum of options. Instead of choosing to maintain a separate state representation for every path, or to group together all paths into a single representation, we can group together subsets of the paths and maintain an approximate state representation for each subset. This allows us to move along this spectrum by trading higher memory and computational requirements for a hopefully closer approximation of the marginal probability.
Figure 3: NPL lattice: Each slice corresponds to one timestep, and each node in a timestep corresponds to a given call depth, l, and elementary operation index, i. A subset of the lattice transitions are shown with blue arrows for PUSH transitions, green for OP and orange for POP.
In our implementation we group together execution paths at each timestep by call depth, l ∈ L, and number of elementary operations performed so far, i ∈ I, and maintain at each timestep a separate embedded state representation for each group of execution paths. Thus the unrolled linear architecture shown in Figure 1 becomes instead a lattice, as shown in Figure 3, with a grid of approximate program states at each timestep. Each node in this lattice represents the state of all paths that are at depth l and elementary operation i when they reach timestep t. Each node contains a soft-argmax of the stack states in M and S and an RNN cell identical to that in Figure 1 (with additional indexes for i and l on all of the inputs and outputs). For each node we must also compute y^{t,l}_i, the probability that at timestep t the execution is at depth l and at elementary operation i and has output the elementary operation sequence λ^{1:i}_o. As before we can compute this recursively as:
y^{t+1,l}_i = p^{t,l+1}_{a,i}(POP) y^{t,l+1}_i + p^{t,l}_{a,i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) y^{t,l}_{i-1} + p^{t,l-1}_{a,i}(PUSH) y^{t,l-1}_i
Similarly, the averaged call stack values are computed recursively as follows:
M^{t+1,l}_{d,i} = α^{t,l+1}_i(POP) M^{t,l+1}_{1,i} + α^{t,l}_{i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) h^{t,l}_{out,i-1} + α^{t,l-1}_i(PUSH) 0,   d = 0
M^{t+1,l}_{d,i} = α^{t,l+1}_i(POP) M^{t,l+1}_{2,i} + α^{t,l}_{i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) M^{t,l}_{1,i-1} + α^{t,l-1}_i(PUSH) h^{t,l-1}_{out,i},   d = 1
M^{t+1,l}_{d,i} = α^{t,l+1}_i(POP) M^{t,l+1}_{d+1,i} + α^{t,l}_{i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) M^{t,l}_{d,i-1} + α^{t,l-1}_i(PUSH) M^{t,l-1}_{d-1,i},   d > 1
S^{t+1,l}_{d,i} = α^{t,l+1}_i(POP) S^{t,l+1}_{1,i} + α^{t,l}_{i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) S^{t,l}_{0,i-1} + α^{t,l-1}_i(PUSH) g^{t,l-1}_{out,i},   d = 0
S^{t+1,l}_{d,i} = α^{t,l+1}_i(POP) S^{t,l+1}_{d+1,i} + α^{t,l}_{i-1}(OP) p^{t,l}_{o,i-1}(λ^i_o) S^{t,l}_{d,i-1} + α^{t,l-1}_i(PUSH) S^{t,l-1}_{d-1,i},   d > 0
We have left out the boundary conditions from the above updates for readability; the details of these are discussed in Appendix A.4.
Finally, the likelihood function approximately maximizes the probability of paths which at any timestep have correctly generated all elementary operations in λ, are currently at depth 0 and are returning from the current program. Formally,
L(θ) = log Σ_{t ∈ T} p^{t,0}_a(POP) y^{t,0}_I
Remark: The specific choice to group by elementary operation index and call depth was motivated by the representational advantages each provides. Specifically:
- Grouping by elementary operation index: allows the model to represent the input world state exactly instead of resorting to the fuzzy world state representation from equation 3.2.
- Grouping by call depth: allows the representation to place probability only on execution paths that return from all subprograms they execute, and return only once from the top level program as specified in equation 3.3.
Table 1 summarizes these advantages and the computational trade-offs discussed earlier.
Blurred Stack | Blurred World | All Paths Return | Computational Cost | Gradient Accuracy
Execute All Paths | False | False | True | Highest | Exact
NPL | True | False | True | Medium | Medium
CTC+StackRNN | True | True | False | Lowest | Lowest
Table 1: Outlines the tradeoff between representational accuracy and computational cost for two extreme solutions and NPL.
Finally, in practice we find that values of the y's quickly underflow, and so we renormalize them at each timestep, as discussed in Appendix A.3.
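The lattice recursion for y^{t,l}_i can be visualized with a small forward pass over the (depth, operation-index) grid. The sketch below is our illustration: it uses constant toy distributions, whereas in the actual model each lattice node has its own RNN cell producing p_a and p_o, and the averaged stacks would be propagated alongside y.
```python
# Sketch of the lattice forward recursion of section 3.3: y[t, l, i] is
# the probability of being at call depth l, having emitted the first i
# target operations, after t steps.
import numpy as np

def lattice_forward(T, L, target_ops, p_a, p_o):
    I = len(target_ops)
    y = np.zeros((T + 1, L, I + 1))
    y[0, 0, 0] = 1.0                  # start at depth 0, nothing emitted
    for t in range(T):
        for l in range(L):
            for i in range(I + 1):
                if y[t, l, i] == 0.0:
                    continue
                if l + 1 < L:         # PUSH: go one level deeper
                    y[t + 1, l + 1, i] += p_a["PUSH"] * y[t, l, i]
                if l > 0:             # POP: return one level up
                    y[t + 1, l - 1, i] += p_a["POP"] * y[t, l, i]
                if i < I:             # OP: emit the next target operation
                    y[t + 1, l, i + 1] += (
                        p_a["OP"] * p_o[target_ops[i]] * y[t, l, i])
    return y

p_a = {"OP": 0.5, "PUSH": 0.25, "POP": 0.25}
p_o = {"MOVE": 1.0}
y = lattice_forward(T=4, L=3, target_ops=["MOVE", "MOVE"], p_a=p_a, p_o=p_o)
print(y[4, 0, 2])  # mass at depth 0 with both ops emitted after 4 steps
```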
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we demonstrate the capability of NPL to learn on both the long-hand addition task (ADDITION) from Reed & de Freitas (2016) and a newly introduced task involving arranging blocks in a grid-world (NANOCRAFT). We show that using the NPL to train with mostly the weak supervision of elementary operation traces, and very few full program traces, our technique significantly outperforms traditional sequence-to-sequence models, and performs comparably to NPI models trained entirely with the strong supervision provided by full program traces. Details of the experimental settings are discussed in Appendix A.5.
NANOCRAFT, PUSH
MOVE_MANY(right), PUSH
L ACT_MOVE(right), STAY
<END>, POP
MOVE_MANY(down), PUSH
L ACT_MOVE(down), STAY
<END>, POP
BUILD_WALL(right), PUSH
PLACE_AND_MOVE(right), PUSH
L ACT_MOVE(right), STAY
ACT_PLACE_BLOCK(wood, red), STAY
<END>, POP
PLACE_AND_MOVE(right), PUSH
ACT_MOVE(right), STAY
<END>, POP
<END>, POP
BUILD_WALL(down), PUSH
<END>, POP
<END>, POP
Figure 4: NANOCRAFT: An illustrative example program, where the agent (denoted as "*") is required to build a 3x4 rectangular red wooden building at a certain location in a 6x6 grid world. We can see that some of the blocks are already in place in the initial world-state. To build the building, the agent (program) first makes two calls to MOVE_MANY to move into place in the X and Y dimensions, and then calls BUILD_WALL four times to build the four walls of the building.
Figure 5: NANOCRAFT Sample Complexity: The x-axis varies the number of samples containing full program abstractions, while the y-axis shows the accuracy. NPL-{64,128,256} shows the accuracy of our model when trained with 64/128/256 training samples. NPI shows the accuracy of NPI, which can utilize only the samples containing full program abstractions. Finally, Seq-{64,128,256} shows the accuracy of a seq2seq baseline when trained on 64/128/256 samples. Its performance does not change as we vary the number of samples with full program abstractions since it cannot utilize the additional supervision they provide."}, {"section_index": "6", "section_name": "4.1 SAMPLE COMPLEXITY", "section_text": "Task: We study the sample complexity using a task we call NANOCRAFT. In this task we consider an environment similar to those utilized in the reinforcement learning literature. The perceptual input comes from a 2-D grid world where each grid cell can be either empty or contain a block with both color and material attributes. The task is to move around the grid world and place blocks in the appropriate grid cells to form a rectangular building. The resulting building must have a set of provided attributes: (1) color, (2) material, (3) location, and sizes in the (4) X and (5) Y dimensions. As shown in the example in Figure 4, at each step the agent can take one of two primitive actions: place a block at the current grid cell with a specific color and material, or move in one of the four
To build\nthe building, the agent (pro-\ngram) first makes two calls to\nMOVE_MANY to move into place\nin the X and Y dimensions, and\nthen calls BUILD_WALL four\ntimes to build the four walls of\nthe building.\nADD, PUSH\n\n\u2018ADD1, PUSH\n\nACT_WRITE(3) STAY\n\nCARRY, PUSH\nACT_PTR_MOVE(1, left), STAY\nACT_WRITE(1), STAY \u2014\u2014___\u2014>}\nACT_PTR_MOVE(1, right), STAY\n<END> , POP\n\nLSHIFT, PUSH\nACT_PTR_MOVE(@, left), STAY\nACT_PTR_MOVE(1, left), STAY\nACT_PTR_MOVE(2, left), STAY\nACT_PTR_MOVE(3, left), STAY\n<END> , POP\n\nL, <END>, Pop.\n\nADD1, PUSH\n\nACT_WRITE(7) ,STAY\n\nLSHIFT, PUSH\nACT_PTR_MOVE(@, left), STAY\nACT_PTR_MOVE(1, left), STAY\nACT_PTR_MOVE(2, left), STAY\nACT_PTR_MOVE(3, left), STAY\n<END> , POP i\n\nLy <END>, PoP\n\nLy <enp>, pop |] @\" 7:3\ncardinal directions. We explored both a fully observable setting, and a partially observable setting\nIn the fully observable setting, the world is presented as a stack of 3 grids, one indicating the materia\nof the block at each location (or empty), a similar one for color and a final one-hot grid indicatin;\nhe agent\u2019s location. In the partially observable setting, the agent is provided only two integers\nndicating the color and material of the block (if any) at the current location. Finally, in both setting\nhe world input state contains an auxiliary vector specifying the five attributes of the building t\noe built. In each sample, a random subset of the necessary blocks have already been placed in th\nworld, and the agent must walk right over these locations without placing a block.\nExperiment Setup: We assume that data with full programmatic abstractions is much more diffi-\ncult to obtain than data containing only flat operation sequences||so we study the sample complexity\nin terms of the number of such samples. All experiments were run with 10 different random seeds.\nand the best model was chosen using a separate validation set which is one-quarter the size of the\ntraining set.\nResults: Figure|5|shows the sample complexity for the NANOCRAFT task in the fully observable\nsetting. We can see that NPL significantly outperforms the NPI baseline (NPJ) when only a subset\nthe total training samples have full abstractions. NPL similarly outperforms a sequence-to-sequence\nbaseline (Seq-*) trained on all of the available data. We also performed preliminary experiments for\nthe partially observable setting, and obtained similar results."}, {"section_index": "7", "section_name": "4.2 GENERALIZATION ABILITY", "section_text": "Task: We study generalization ability using the ADDITION task from{Reed & de Freitas|\n\nThe objective of this task is to read in two numbers represented as digit sequences and compute the\ndigit sequence resulting from the summation of these two numbers. The goal is to let the mode\nlearn the basic procedure of long-hand addition: repeatedly add two one-digit numbers, write dowr\nthe result (and the carry bit if necessary) and move to the left until the beginning of the number:\nis reached. The whole procedure is represented using a four-row scratch pad, where the first anc\nsecond rows are input digit sequences, the third row is the carry digit and the forth row the result\nThe model is provided a world-state observation which only provides a partial view into the ful\nscratchpad state. Specifically, it is provided the integers at the location of four different pointers\neach in one row of the scratchpad. 
The model has two possible elementary operations, either move\na pointer left or right, or write a single digit into one of the four pointer locations. All four pointer:\nstart at the rightmost location (the least significant digit), and are gradually moved to the left by the\nOperation sequences can be obtained by observing a human demonstrating a task, whereas full abstractions\nrequire additional effort to annotate such traces.\nFigure 6: ADDITION: An 1I-\nlustrative example program o!\nthe addition of 25 to 48. We\nhave four pointers (denotec\nas \u201c*\u201d) one for each row\nof the scratch pad. We re-\npeatedly call ADD1 until we\nhit the left most entry in the\nscratch pad. Each call te\nADD1 we call ACT_WRITE tc\nwrite the result, CARRY tc\nwrite the carry digit (if nec-\nessary) and LSHIFT to shif\nall four pointers to the left tc\nwork on the next digit. The\ndigit sequence on the fourtt\nrow of scratch pad is the resul\nof the addition\nGENERALIZATION ON ADDITION\n\n2, 2. .\n2 \u2014\u2014 +\n1 I\n5 S\nos 3S\nPy T\n8 <,\noS 1\n\u20186.\n\u00b0, # DIGITS mY,\n\u2014\u2014e ey\n50 500\n\nmem S2S-Easy-16 mem S2S-Easy-32 =e NPI-1 =e= NPI-16 =e= NPL-16-1\nFigure 7: ADDITION Generalization Performance: The x-axis varies the number of input digits\nfor the samples in the test set, while the y-axis shows the accuracy. All models are trained on addition\nprograms with inputs of 1 to 10 digits. MPL-16-1 shows the accuracy of our model when trained\nwith 16 total samples (per number of digits), of which / sample (per number of digits) includes full\nprogram abstractions. NPI-1 and NPI-16 show the accuracy of the NPI model when trained with 1\ntotal samples and 16 total samples respectively (per number of digits), all containing full program\nabstractions. S2S-Easy-16 and S2S-Easy-32 show the performance of the $2S-Easy baseline when\ntrained with 16 and 32 samples respectively (per number of digits).\nprogram throughout the execution. Figure[6]gives an example of a full program trace as well as stat\nof the scratch pad at a particular timestep.\nExperiment Setup: A primary advantage of learning programmatic abstractions over sequence:\nis an increased generalization capability. To evaluate this, we train our model on samples ranging\nfrom | to 10 input digits . The training data contains an equal number of samples of each lengtt\n(number of digits), and includes full program abstractions for only one randomly chosen sample\nfor each length such that |FULL| = 10. We then test NPL using samples containing a much large!\nnumber of digits, ranging up to 1,000. On this task we found that both our model and the origina\nNPI model were somewhat sensitive to the choice of initial seed, so we sample many different seed:\nand report both the mean and standard deviation, using a bootstrapping setup Efron & Tibshirani\n\n(1994)) which is detailed in Appendix[A.6.2]\nCompared Models: We originally compared to a standard flat LSTM sequence model. However,\nwe found that even with 32 samples per digit such a model was not able to fit even the training\ndata for samples with more than 4 or 5 digits, so we did not present these results[?| Instead, we\ncompare to a model called $2S-Easy, which is the strongest baseline for this task from (Reed &\n. This model is custom-designed for learning addition and so it represents a very\nstrong baseline. 
We discuss the model details in Appendix A.6.1. For completeness we also compare to a reimplementation of NPI in two different training regimes.
Results: Figure 7 shows the generalization capabilities of our model on the ADDITION task. Our model with "one-shot" strong supervision (NPL-16-1) significantly outperforms the S2S-Easy baseline even when the baseline is provided twice as many training samples (S2S-Easy-32). This is particularly notable given that the S2S-Easy model is specifically designed for the addition task. This result highlights the generalization capabilities our model brings by learning the latent structures which generate the observed sequences of elementary operations. Furthermore, we can see that these latent structures are learned mostly from the unlabeled sequences, since the vanilla NPI model trained with only 1 sample per digit (NPI-1) cannot generalize beyond the 10-digit data on which it was trained. Finally, we can see that just a single fully supervised sample is sufficient since it enables our model to perform comparably with a vanilla NPI model trained with FULL supervision for all samples (NPI-16)."}, {"section_index": "8", "section_name": "5 RELATED WORK", "section_text": "Neural Programs Training neural networks to perform algorithmic tasks has been the focus of much recent research. This work falls into two main categories: weakly supervised methods that learn from input-output examples, and strongly supervised methods that additionally have access to the sequence of elementary actions performed to generate the output.
The work on learning neural programs from input-output data was sparked by the surprising effectiveness of the Neural Turing Machine (NTM) (Graves et al., 2014). Similar to NTMs, many of the proposed architectures have used differentiable memory (Kurach et al., 2016; Graves et al., 2016; Weston et al., 2014; Sukhbaatar et al., 2015b; Neelakantan et al., 2016; Feser et al., 2016), while others have used REINFORCE (Williams, 1992) to train neural networks that use sampling-based components to model memory access (Andrychowicz & Kurach, 2016; Zaremba & Sutskever, 2015). Some of this work has considered learning addition from input-output samples, a similar, but more challenging setup than our ADDITION domain. Zaremba & Sutskever (2014) makes use of a few training tricks to enable a standard LSTM to learn to add numbers up to length 9 when training on numbers of the same length. Kalchbrenner et al. (2015) proposes an architecture that is able to learn to add 15-digit numbers when trained on numbers of the same length. The Neural GPU model from Kaiser & Sutskever (2015) learns to add binary numbers 100 times longer than those seen during training, but requires tens of thousands of training samples and extensive hyperparameter searches. Additionally, using a decimal instead of binary representation with the Neural GPU model (as in our ADDITION task) is also reported to have a significant negative impact on performance.
The work on learning algorithms from sequence data has utilized both related techniques to ours as well as tackled related tasks. The most related techniques have augmented RNNs with various attention and memory architectures. In addition to those we have discussed earlier (Reed & de Freitas, 2016; Joulin & Mikolov, 2015), Grefenstette et al. (2015) proposes an alternative method for augmenting RNNs with a stack. From a task perspective, the most related work has considered variants of the scratchpad model for long-hand addition, similar to our ADDITION domain. This work has focused largely on more standard RNN architectures, starting with Cottrell & Tsung (1993), which showed that the standard RNN architectures at the time (Elman, 1990) could successfully generalize to test samples approximately 5 times as long as those seen during training if a few longer samples were included in the training set. More recently, Zaremba et al. (2015) showed that an RNN architecture using modern LSTM or GRU controllers can perfectly generalize to inputs 20 times as long as those seen in the training data when trained in either a supervised or reinforcement learning setting. However this work was focused on trainability rather than data efficiency and so they utilized hundreds of thousands of samples for training.
NPI (Reed & de Freitas, 2016) and NPL distinguish themselves from the above work with the explicit modeling of functional abstractions. These abstractions enable our model, with only 16 samples, to perfectly generalize to data sequences about 100 times as long as those in the training data.
Furthermore, concurrent work has shown that an unmodified NPI model can be trained to perform more complex algorithms such as BubbleSort, QuickSort and topological sorting by learning recursive procedures, and we expect that our method can be directly applied to reduce the amount of needed supervision for these tasks as well.
Reinforcement Learning In the reinforcement learning domain the most related work to ours is the options framework, for building abstractions over elementary actions (Sutton et al., 1999). This framework bears many similarities to both our model and to NPI. Specifically, at each time step the
In future work we would like to explore the use of our model in setups which mix supervised learning with reinforcement learning.

In this paper, we proposed the Neural Program Lattice, a neural network framework that learns a hierarchical program structure based mostly on elementary operation sequences. On the NANOCRAFT and ADDITION tasks, we show that when training with mostly flat operation sequences, NPL is able to extract the latent programmatic structure in the sequences, and achieve state-of-the-art performance with much less supervision than existing models."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Making neural programming architectures generalize via recursion. 2016. Under submission to ICLR 2017.

Bradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC Press, 1994.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.

John K Feser, Marc Brockschmidt, Alexander L Gaunt, and Daniel Tarlow. Neural functional programming. arXiv preprint arXiv:1611.01988, 2016.

Alexander L Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.

Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory. arXiv preprint arXiv:1602.03218, 2016.

Garrison W Cottrell and Fu-Sheng Tsung. Learning simple arithmetic procedures. Connection Science, 5(1):37-58, 1993.

Alex Graves, Santiago Fernandez, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369-376. ACM, 2006.

Michael I Jordan. Serial order: A parallel distributed processing approach. Advances in Psychology, 121:471-495, 1997.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.

Marlos C Machado and Michael Bowling. Learning purposeful behaviour in the absence of rewards. arXiv preprint arXiv:1605.07700, 2016.

Microsoft Corp. Redmond WA. Kinect for Xbox 360.

Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. ICLR, 2016.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. Mazebase: A sandbox for learning from games. arXiv preprint arXiv:1511.07401, 2015a.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015b.

Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198,
2015."}, {"section_index": "10", "section_name": "A.1 DATASET DETAILS", "section_text": "Table 2 lists the set of programs and elementary operations we used to generate the data for ADDITION and NANOCRAFT. The programs and elementary operations for ADDITION are identical to those in Reed & de Freitas (2016). Note that when training with weak supervision the training data contains only the elementary operations and does not contain the programs or arguments.

Program | Description | Calls
ADD | Multi-digit addition | ADD1
ADD1 | Single-digit addition | ACT_WRITE/CARRY/LSHIFT
CARRY | Write carry digit | ACT_PTR_MOVE/ACT_WRITE
LSHIFT | Shift four pointers left | ACT_PTR_MOVE
ACT_WRITE | Write result to environment | Elementary Operation
ACT_PTR_MOVE | Move pointer to left/right | Elementary Operation
NANOCRAFT | Build a rectangular fence | MOVE_MANY/BUILD_WALL
MOVE_MANY | Move multiple steps in one direction | ACT_MOVE
BUILD_WALL | Build a wall along one direction | PLACE_AND_MOVE
PLACE_AND_MOVE | Move one step and build a block | ACT_MOVE/ACT_PLACE_BLOCK
ACT_MOVE | Move one step in a direction | Elementary Operation
ACT_PLACE_BLOCK | Build a block at current location | Elementary Operation

Table 2: Programs, arguments and elementary operations used for generating training data of ADDITION and NANOCRAFT tasks."}, {"section_index": "11", "section_name": "A.2 IMPLEMENTATION DETAILS", "section_text": "Here we describe the implementation details of the various component neural networks inside our implementation of the NPL. Note that the mappings are all the same for both ADDITION and NANOCRAFT except for f_enc, which is task dependent.

- f_enc for ADDITION: We represent the environment observation, (latent) programs and arguments as one-hot vectors of discrete states. We feed the concatenation of the one-hot vectors for environment observation and argument through a linear decoder (with bias) to get a unified arg-env representation. We then embed the programs (via f_embed) into an embedding space. Finally we feed the concatenation of the arg-env vector and program vector through a 2-layer MLP with rectified linear (ReLU) hidden activation and linear decoder.

- f_enc for NANOCRAFT: We represent the environment observation as a grid of discrete states. Here we first embed each entry into an embedding space, and then feed this embedding through two convolutional layers and two MLP layers with ReLU hidden activation and linear decoder. We represent arguments again as one-hot vectors and embed programs into an embedding space. Finally we feed the concatenation of argument vectors, convolutional vectors of the environment observation and the program vector through a 2-layer MLP with ReLU hidden activation and linear decoder.

- f_lstm: We employ a two-layer LSTM cell for the mapping. The size of the hidden states is set to 128 for both ADDITION and NANOCRAFT.

- f_prog: This mapping maps the LSTM hidden state to a probability distribution over programs. The hidden state output of f_lstm is mapped through a linear projection to an 8-dimensional space, and then another linear projection (with bias) with softmax generates the program distribution.

- f_action and f_op: Each of these encoders outputs a probability distribution. We feed the top hidden state output by f_lstm first through a linear projection (with bias) and then a softmax function to get p_a^t and p_o^t, respectively.
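To make the wiring of these mappings concrete, the following is a minimal PyTorch sketch of the ADDITION-task variant. The one-hot inputs, 2-layer MLP encoder, two-layer LSTM with 128 hidden units, 8-dimensional program key space and the three output heads follow the description above; all other names and dimensions (e.g. NPLCore, embed_dim) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class NPLCore(nn.Module):
    """Sketch of the NPL component mappings for the ADDITION task."""
    def __init__(self, env_dim, arg_dim, n_progs, n_actions, n_stack_ops=3,
                 embed_dim=32, hidden_dim=128):
        super().__init__()
        self.f_embed = nn.Embedding(n_progs, embed_dim)          # program embedding
        self.arg_env = nn.Linear(env_dim + arg_dim, embed_dim)   # unified arg-env code
        self.f_enc = nn.Sequential(                              # 2-layer MLP encoder
            nn.Linear(2 * embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.f_lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=2)
        self.key = nn.Linear(hidden_dim, 8)                      # 8-dim program key space
        self.f_prog = nn.Linear(8, n_progs)
        self.f_action = nn.Linear(hidden_dim, n_actions)         # p_a^t
        self.f_op = nn.Linear(hidden_dim, n_stack_ops)           # p_o^t over PUSH/OP/POP

    def forward(self, env_onehot, arg_onehot, prog_id, state=None):
        u = self.arg_env(torch.cat([env_onehot, arg_onehot], dim=-1))
        x = self.f_enc(torch.cat([u, self.f_embed(prog_id)], dim=-1))
        h, state = self.f_lstm(x.unsqueeze(0), state)
        h = h.squeeze(0)
        p_prog = torch.softmax(self.f_prog(self.key(h)), dim=-1)
        p_a = torch.softmax(self.f_action(h), dim=-1)
        p_o = torch.softmax(self.f_op(h), dim=-1)
        return p_prog, p_a, p_o, state
```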
When the operation sequence is too long, the unnormalized lattice weights will become vanishingly small as t grows. To prevent our implementation from underflowing, we follow Graves et al. (2006) by renormalizing the lattice weights at each timestep and storing the normalized values and normalization constant separately. The new update rule becomes:

$$\tilde{g}_{t+1}^{l,d} = [l < L]\, p_t^o(\mathrm{POP})\, g_t^{l+1,d} + [0 < d]\, p_t^o(\mathrm{OP})\, p_t^a(a_t)\, g_t^{l,d} + [0 < l]\, p_t^o(\mathrm{PUSH})\, g_t^{l-1,d}$$

and we normalize the values and maintain a log-summation of the normalization constants:

$$Y^{t+1} = Y^{t} + \log\Big(\sum_{l,d} \tilde{g}_{t+1}^{l,d}\Big), \qquad g_{t+1}^{l,d} = \tilde{g}_{t+1}^{l,d} \Big/ \sum_{l,d} \tilde{g}_{t+1}^{l,d},$$

$$\log(y^{t+1}) = \operatorname{log\_sum\_exp}\big(\log(y^{t}),\; \log(p_t^o(\mathrm{POP})) + \log(g_t^{0,0}) + Y^{t}\big).$$

In Section 3.3 we did not include the boundary conditions in our discussion to improve the readability. Our implementation, however, must account for the bounds on l_t and d_t, as shown in the Iverson brackets in the full update equations below:

$$M_{i}^{t+1,l,d} = [l<L]\, p_t^o(\mathrm{POP})\, M_{i}^{t,l+1,d} + [0<d]\, p_t^o(\mathrm{OP})\, p_t^a(a_t)\, M_{i}^{t,l,d} + [0<l]\, p_t^o(\mathrm{PUSH}) \cdot \begin{cases} 0, & d = 0\\ h_{i}^{t,l-1}, & d = 1\\ M_{i}^{t,l-1,d-1}, & d > 1 \end{cases}$$

$$S_{i}^{t+1,l,d} = [l<L]\, p_t^o(\mathrm{POP})\, S_{i}^{t,l+1,d} + [0<d]\, p_t^o(\mathrm{OP})\, p_t^a(a_t)\, S_{i}^{t,l,d} + [0<l]\, p_t^o(\mathrm{PUSH}) \cdot \begin{cases} g_{t}^{l-1}, & d = 0\\ S_{i}^{t,l-1,d-1}, & d > 0 \end{cases}$$

As mentioned before, NPL can be trained jointly with full program abstractions (referred to as FULL) as well as elementary operation sequences (referred to as OP). When training with FULL samples, the training procedure is similar to that for NPI, and we use this setting as one of our baselines. For each dataset on which we test NPL, we include mostly OP samples with only a small number of FULL samples. We pre-train the model solely on FULL samples for a few iterations to get a good initialization. After that, in each step we train with a batch of data purely from FULL or OP based on their proportions in the dataset and generate the parameter update in that step using the corresponding objective. For all tasks, we train the NPL using ADAM (Kingma & Ba, 2015) with a base learning rate of 10^-4 and batch size of 1. We decay the learning rate by a factor of 0.95 every 10,000 iterations. These settings were chosen using a manual search based on performance on the validation data."}, {"section_index": "12", "section_name": "A.6.1 S2S-Easy BASELINE", "section_text": "In our initial seq2seq baseline tests for ADDITION we represented the data for 90 + 160 = 250 as the sequence: 90X160X250. However, we found that such a model was not able to fit the training data even when trained with 32 samples per number of digits. So we instead compared to the much stronger S2S-Easy baseline presented in Reed & de Freitas (2016). This baseline makes it much easier to learn addition through the following two modifications to the model: 1) reverse the input digits, and 2) generate reversed output digits immediately at each time step, such that the data sequence looks like: output: 052, input 1: 090, input 2: 061. This model is quite specific to the ADDITION task (and would not work on the NANOCRAFT task, for instance) and results in a very strong baseline. Nonetheless, as we showed in Figure 7, our model still significantly outperforms this baseline.
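The renormalization described earlier in this appendix is, in effect, a running log-sum-exp. Below is a minimal NumPy sketch of that pattern under the stated assumptions (a lattice of weights updated multiplicatively by probabilities); the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def renormalized_step(g, log_Y, transition):
    """One lattice update in normalized space.

    g:          current normalized lattice weights, shape (L, D)
    log_Y:      running log of the product of normalization constants
    transition: function mapping g to the unnormalized update g_tilde
    """
    g_tilde = transition(g)              # e.g. POP/OP/PUSH-weighted combination
    z = g_tilde.sum()
    log_Y = log_Y + np.log(z)            # accumulate normalizer in log space
    return g_tilde / z, log_Y

def accumulate_log_y(log_y, log_p_pop, log_g_term, log_Y):
    """Numerically stable log-sum-exp accumulation of termination probability."""
    return np.logaddexp(log_y, log_p_pop + log_g_term + log_Y)
```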
"}, {"section_index": "13", "section_name": "A.6.2 BOOTSTRAPPING", "section_text": "On the ADDITION task we found that both our model and the original NPI model were somewhat sensitive to the choice of initial seed. To test this sensitivity we ran our experiments for this task using a bootstrapping process (Efron & Tibshirani, 1994). We ran all models using 100 different seeds for each model. We then sampled 25 seed subsets, with replacement. For each subset, we chose the best seed using a validation set which was one-quarter the size of the original dataset, but consisted only of 10-digit samples. We performed this resampling procedure 100 times, and in Figure 7 we report the mean and standard deviation across the resampled seed sets."}]
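As a concrete illustration of the seed-resampling procedure above, here is a small NumPy sketch; the array shapes and the assumption of one scalar validation/test score per seed are for illustration, not the evaluation code used in the paper.

```python
import numpy as np

def bootstrap_seed_selection(val_scores, test_scores, n_rounds=100,
                             subset_size=25, rng=None):
    """val_scores, test_scores: arrays of shape (n_seeds,), one entry per seed.

    Repeatedly draw seed subsets with replacement, pick the best seed on the
    validation score, and report statistics of the chosen seeds' test scores.
    """
    rng = rng or np.random.default_rng(0)
    n_seeds = len(val_scores)
    picked = []
    for _ in range(n_rounds):
        subset = rng.integers(0, n_seeds, size=subset_size)  # sample with replacement
        best = subset[np.argmax(val_scores[subset])]         # select seed on validation
        picked.append(test_scores[best])
    picked = np.asarray(picked)
    return picked.mean(), picked.std()
```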
HJOZBvcel
[{"section_index": "0", "section_name": "LEARNING TO DISCOVER SPARSE GRAPHICAL MODELS", "section_text": "Eugene Belilovsky\nUniversity of Paris-Saclay, France\ngael.varoquaux@inria.fr"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Probabilistic graphical models provide a powerful framework for describing the dependencies betwee!\na set of variables. Many applications infer the structure of a probabilistic graphical model from dat\nto elucidate the relationships between variables. These relationships are often represented by at\nundirected graphical model also known as a Markov Random Field (MRF). We focus on a commot\nMRF model, Gaussian graphical models (GGMs). GGMs are used in structure-discovery settings fo\nrich data such as neuroimaging, genetics, or finance (Friedman et al.|/2008} Ryali et al} 2012} [Mohai\net al.| 2012} [Belilovsky et al. 2016). Although multivariate Gaussian distributions are well-behavec\ndetermining likely structures from few examples is a complex task when the data is high dimensiona\nIt requires strong priors, typically a sparsity assumption, or other restrictions on the structure of th\nsraph, which now make the distribution difficult to express analytically and use.\nA standard approach to estimating structure with GGMs in high dimensions is based on the classic\nresult that the zeros of a precision matrix correspond to zero partial correlation, a necessary and\nsufficient condition for conditional independence ). Assuming only a few conditional\ndependencies corresponds to a sparsity constraint on the entries of the precision matrix, leading to a\ncombinatorial problem. Many popular approaches to learning GGMs can be seen as leveraging the\nUniversity of Montreal, Canada\nmatthew.blaschko@esat.kuleuven.be"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We consider structure discovery of undirected graphical models from observational\ndata. Inferring likely structures from few examples is a complex task often requiring\nthe formulation of priors and sophisticated inference procedures. In the setting of\nGaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood\nobjective with a penalization on the precision matrix. Adapting this estimator to\ncapture domain-specific knowledge as priors or a new data likelihood requires great\neffort. In addition, structure recovery is an indirect consequence of the data-fit\nterm. By contrast, it may be easier to generate training samples of data that arise\nfrom graphs with the desired structure properties. We propose here to leverage\nthis latter source of information as training data to learn a function mapping\nfrom empirical covariance matrices to estimated graph structures. Learning this\nfunction brings two benefits: it implicitly models the desired structure or sparsity\nproperties to form suitable priors, and it can be tailored to the specific problem of\nedge structure discovery, rather than maximizing data likelihood. We apply this\nframework to several real-world problems in structure discovery and show that it\ncan be competitive to standard approaches such as graphical lasso, at a fraction\nof the execution speed. We use convolutional neural networks to parametrize our\nestimators due to the compositional structure of the problem. Experimentally,\nour learnable graph-discovery method trained on synthetic data generalizes well:\nidentifying relevant edges in real data, completely unknown at training time. 
We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.

$$\hat{\Theta} = \arg\min_{\Theta \succ 0}\; -\log\det\Theta + \mathrm{Tr}(\hat{\Sigma}\Theta) + \lambda\|\Theta\|_1, \qquad (1)$$

which can be seen as a penalized maximum-likelihood estimator. Here $\Theta$ and $\hat{\Sigma}$ are the precision and sample covariance matrices, respectively. A large variety of alternative regularization penalties extend the priors of the graphical lasso (e.g. Varoquaux et al.). However, several problems arise in this approach. Constructing novel surrogates for structured-sparsity assumptions on MRF structures is challenging, as a prior needs to be formulated and incorporated into a penalized maximum likelihood objective, which then needs an efficient optimization algorithm to be developed, often within a separate research effort. Furthermore, model selection in a penalized maximum likelihood setting is difficult as regularization parameters are often unintuitive.

We propose to learn the estimator. Rather than manually designing a specific graph-estimation procedure, we frame this estimator-engineering problem as a learning problem, selecting a function from a large flexible function class by risk minimization. This allows us to construct a loss function that explicitly aims to recover the edge structure. Indeed, sampling from a distribution of graphs and empirical covariances with desired properties is often possible, even when this distribution is not analytically tractable. As such we can perform empirical risk minimization to select an appropriate function for edge estimation. Such a framework gives easier control of the assumed level of sparsity (as opposed to graph lasso) and can impose structure on the sampling to shape the expected distribution, while optimizing a desired performance metric.

For particular cases we show that the problem of interest can be solved with a polynomial function, which is learnable with a neural network (Andoni et al., 2014). Motivated by this fact, as well as theoretical and empirical results on learning smooth functions approximating solutions to combinatorial problems (Cohen et al., 2016; Vinyals et al., 2015), we propose to use a particular convolutional neural network as the function class. We train it by sampling small datasets, generated from graphs with the prescribed properties, with a primary focus on sparse graphical models. We estimate from this data small-sample covariance matrices (n < p), where n is the number of samples and p is the dimensionality of the data. Then we use them as training data for the neural network (Figure 2), where target labels are indicators of present and absent edges in the underlying GGM. The learned network can then be employed in various real-world structure discovery problems.

In Section 1.1 we review the related work. In Section 2 we formulate the risk minimization view of graph-structure inference and describe how it applies to sparse GGMs. Section 2.2 motivates the deep-learning architecture we chose to use for the sparse GGM problem. In Section 3 we describe the details of how we train an edge estimator for sparse GGMs. We then evaluate its properties extensively on simulation data. Finally, we show that this edge estimator trained only on synthetic data can obtain state of the art performance at inference time on real neuroimaging and genetics problems, while being much faster to execute than other methods.
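For reference, the classical baseline in Eq. (1) is available off the shelf; the following is a minimal scikit-learn sketch of the graphical-lasso estimator that the learned network is later compared against. The synthetic data here is an illustrative stand-in, not the paper's evaluation protocol.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Illustrative stand-in data: n samples of a p-dimensional Gaussian.
# GraphicalLassoCV picks the lambda of Eq. (1) by cross-validation.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))

model = GraphicalLassoCV().fit(X)
precision = model.precision_                      # estimated Theta
edges = (np.abs(precision) > 1e-8) & ~np.eye(20, dtype=bool)
print("estimated number of edges:", edges.sum() // 2)
```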
Lopez-Paz et al. (2015) analyze learning functions to identify the structure of directed graphical models in causal inference using estimates of kernel-mean embeddings. As in our work, they demonstrate the use of simulations for training while testing on real data. Unlike our work, they primarily focus on finding the causal direction in two node graphs with many observations.

Our learning architecture is motivated by the recent literature on deep networks. Vinyals et al. (2015) have shown that neural networks can learn approximate solutions to NP-hard combinatorial problems, and the problem of optimal edge recovery in MRFs can be seen as a combinatorial optimization problem. Several recent works have proposed neural architectures for graph input data (Henaff et al., 2015; Duvenaud et al., 2015; Li et al., 2016). These are based on multi-layer convolutional networks, as in our work, or multi-step recurrent neural networks. The input in our approach can be viewed as a complete graph, while the output is a sparse graph, thus none of these are directly applicable. A related use of deep networks is to learn to approximate the steps of a known sparse recovery algorithm.

Bayesian approaches to structure learning rely on priors on the graph combined with sampling techniques to estimate the posterior of the graph structure. Some approaches make assumptions on the decomposability of the graph (Moghaddam et al., 2009). The G-Wishart distribution is a popular distribution which forms part of a framework for structure inference, and advances have been recently made in efficient sampling (Mohammadi & Wit, 2015). These methods can still be rather slow compared to competing methods, and in the setting of p > n we find they are less powerful.

Following the learning view outlined above, we measure the quality of an edge estimator f by its risk,

$$R(f) = \mathbb{E}_{(\hat{\Sigma}, Y) \sim P}\big[\, l(f(\hat{\Sigma}), Y)\,\big].$$

Here $l : \mathcal{L}^{N_e} \times \mathcal{L}^{N_e} \rightarrow \mathbb{R}^+$ is the loss function. For graphical model selection the 0/1 loss function is the natural error metric to consider.

The design of the estimator in Equation (1) is not explicitly minimizing this risk functional. Thus modifying the estimator to fit a different class of graphs (e.g. small-world networks) while minimizing R(f) is not obvious. Furthermore, in practical settings the optimal λ is unknown and precision matrix entries can be very small. We would prefer to directly minimize the risk functional. Desired structural assumptions on samples from P on the underlying graph, such as sparsity, may imply that the distribution is not tractable for analytic solutions. Meanwhile, we can often devise a sampling procedure for P allowing us to select an appropriate function via empirical risk minimization. Thus it is sufficient to define a rich enough $\mathcal{F}$ over which we can minimize the empirical risk over the samples generated, giving us a learning objective over N samples $\{Y_k, \hat{\Sigma}_k\}_{k=1}^{N}$ drawn from P: $\min_{w} \frac{1}{N}\sum_{k=1}^{N} l(f_w(\hat{\Sigma}_k), Y_k)$. To maintain tractability, we use the standard cross-entropy loss as a convex surrogate, given by:

$$l(f_w(\hat{\Sigma}), Y) = -\sum_{i \neq j}\Big( Y^{ij}\log(f_w^{ij}(\hat{\Sigma})) + (1 - Y^{ij})\log(1 - f_w^{ij}(\hat{\Sigma})) \Big), \qquad (4)$$

where the binary edge labels are given by the conditional dependence structure,

$$Y^{ij} = \mathbb{1}\big[\, x_i \not\perp x_j \mid x_{V \setminus \{i,j\}} \,\big].$$
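Eq. (4) is a masked binary cross-entropy over the off-diagonal entries; a minimal PyTorch sketch follows (tensor shapes, the epsilon, and the function name are illustrative assumptions).

```python
import torch

def edge_cross_entropy(pred, target):
    """Eq. (4): cross-entropy over off-diagonal edge predictions.

    pred:   (p, p) tensor of edge probabilities f_w(Sigma_hat), values in (0, 1)
    target: (p, p) binary tensor Y of true edges
    """
    p = pred.shape[0]
    off_diag = ~torch.eye(p, dtype=torch.bool)
    pred, target = pred[off_diag], target[off_diag].float()
    eps = 1e-8                                  # numerical safety for the logs
    return -(target * torch.log(pred + eps)
             + (1 - target) * torch.log(1 - pred + eps)).sum()
```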
The estimator with minimum risk is generally not possible to compute as a closed form expression for most interesting choices of P, such as those arising from sparse graphs. In this setting, Eq. (1) achieves the information theoretic optimal recovery rate up to a constant for certain P corresponding to uniformly sparse graphs with a maximum degree, but only when the optimal λ is used and the non-zero precision matrix values are bounded away from zero.

We discuss how the described approach can be applied to recover sparse Gaussian graphical models. A typical assumption in many modalities is that the number of edges is sparse. A convenient property of these GGMs is that the precision matrix has a zero value in the (i, j)th entry precisely when variables i and j are independent conditioned on all others. Additionally, the precision matrix and partial correlation matrix have the same sparsity pattern, while the partial correlation matrix has normalized entries.

We propose to simulate our a priori assumptions of sparsity and Gaussianity to learn $f_w(\hat{\Sigma})$, which can then produce predictions of edges from the input data. We model P(x|G) as arising from a sparse prior on the graph G and correspondingly the entries of the precision matrix Θ. To obtain a single training example, $\hat{\Sigma}$ corresponds to n i.i.d. samples from $\mathcal{N}(0, \Theta^{-1})$. We can now train $f_w(\hat{\Sigma})$ by generating sample pairs $(\hat{\Sigma}, Y)$. At execution time we standardize the input data and compute the covariance matrix before evaluating $f_w(\hat{\Sigma})$. The process of learning $f_w$ for the sparse GGM is given in Algorithm 1.

Algorithm 1 Training a GGM edge estimator
  for i ∈ {1, .., N} do
    Sample G_i ∼ P(G)
    Sample Θ_i ∼ P(Θ | G_i)
    X_i ← n i.i.d. samples from N(0, Θ_i^{-1})
    Construct the pair (Σ̂_i, Y_i) from (X_i, Θ_i)
  end for
  Select a function class F (e.g. a CNN)
  Optimize: min_{f∈F} (1/N) Σ_{k=1}^{N} l(f(Σ̂_k), Y_k)

A weakly-informative sparsity prior is one where each edge is equally likely with small probability, versus structured sparsity where edges have specific configurations. For obtaining the training samples $(\hat{\Sigma}, Y)$ in this case we would like to create a sparse precision matrix, Θ, with the desired number of zero entries distributed uniformly. One strategy to do this and assure the precision matrices lie in the positive definite cone is to first construct an upper triangular sparse matrix and then multiply it by its transpose. This process is described in detail in the experimental section. Alternatively, an MCMC based G-Wishart distribution sampler can be employed if specific structures of the graph are desired (Lenkoski, 2013).
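A minimal NumPy sketch of the sampling loop in Algorithm 1, using the triangular-matrix construction mentioned above; the sparsity level, value range, and unit diagonal are illustrative assumptions (the experimental section gives the settings actually used).

```python
import numpy as np

def sample_training_pair(p=39, n=35, zero_prob=0.95, c=0.3, rng=None):
    """Generate one (empirical covariance, edge labels) training pair."""
    rng = rng or np.random.default_rng()
    # Sparse triangular factor; U @ U.T is positive definite by construction.
    U = rng.uniform(-c, c, size=(p, p))
    U[rng.random((p, p)) < zero_prob] = 0.0
    U = np.triu(U)
    np.fill_diagonal(U, 1.0)                        # keep it well conditioned
    theta = U @ U.T                                 # sparse precision matrix
    Y = (np.abs(theta) > 1e-10).astype(np.float32)  # edge labels from the support
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta), size=n)
    X = (X - X.mean(0)) / X.std(0)                  # standardize as at execution time
    sigma_hat = X.T @ X / n                         # empirical covariance
    return sigma_hat, Y
```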
Normally, such structural patterns require sophisticated adaptation when applying estimators like Eq. (1). Indeed, high-degree nodes break the small-sample, sparse-recovery properties of ℓ1-penalized estimators (2011). In our framework such structural assumptions appear as a prior that can be learned offline during training of the prediction function. Similarly, priors on other distributions such as general exponential families can be more easily integrated. As the structure discovery model can be trained offline, even a slow sampling procedure may suffice.

In this work we propose to use a neural network as our function $f_w$. To motivate this, let us consider the extreme case when n ≫ p. In this case $\hat{\Sigma} \approx \Sigma$ and thus entries of $\hat{\Sigma}^{-1}$, or the partial correlations, that are almost equal to zero can give the edge structure. The partial correlations obey the recursion

$$\rho_{i,j|Z} = \frac{\rho_{i,j|Z\setminus z_0} - \rho_{i,z_0|Z\setminus z_0}\,\rho_{j,z_0|Z\setminus z_0}}{\sqrt{1-\rho^2_{i,z_0|Z\setminus z_0}}\,\sqrt{1-\rho^2_{j,z_0|Z\setminus z_0}}}, \qquad z_0 \in Z. \qquad (5)$$

We may ignore the denominator, D, as we are interested in $\mathbb{1}(\rho_{i,j|Z} = 0)$. Thus we are left with a recursive formula that yields a high degree polynomial. From (Andoni et al., 2014, Theorem 3.1), using gradient descent, a neural network with only two layers can learn a polynomial function of degree d to arbitrary precision given sufficient hidden units.

Remark 1. Naively the polynomial from the recursive definition of partial correlation is of degree bounded by $2^{p-2}$. In the worst case, this would seem to imply that we would need an exponentially growing number of hidden nodes to approximate it. However, this problem has a great deal of structure that can allow efficient approximation. Firstly, higher order monomials will go to zero quickly with a uniform prior on $\rho_{i,j}$, which takes values between 0 and 1, suggesting that in many cases a concentration bound exists that guarantees non-exponential growth. Furthermore, the existence result is shown already for a shallow network, and we expect a logarithmic decrease in the number of parameters to perform function estimation with a deep network (Cohen et al., 2016). Moreover, there are a great deal of redundant computations in Eq. (5) and an efficient dynamic programming implementation can yield polynomial computation time and require only low order polynomial computations with appropriate storage of previous computation. Similarly we would like to design a network that would have capacity to re-use computations across edges and approximate low order polynomials. We also observe that the conditional independence of nodes i, j given Z can be computed equivalently in many ways by considering many paths through the nodes Z. Thus we can choose any valid ordering for traversing the nodes starting from a given edge.
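The recursion in Eq. (5), with the dynamic-programming reuse mentioned in Remark 1, can be written down directly; a small Python sketch (memoized, ignoring numerical corner cases such as unit correlations):

```python
from functools import lru_cache
import numpy as np

def partial_correlations(corr):
    """Partial correlations rho_{i,j|Z} via the recursion of Eq. (5).

    corr: (p, p) correlation matrix; Z is represented as a frozenset of node ids.
    Memoization plays the role of the dynamic-programming table in Remark 1.
    """
    @lru_cache(maxsize=None)
    def rho(i, j, Z):
        if not Z:
            return corr[i, j]
        z0 = min(Z)                      # any valid ordering of Z works
        Zr = Z - {z0}
        a, b, c = rho(i, j, Zr), rho(i, z0, Zr), rho(j, z0, Zr)
        return (a - b * c) / np.sqrt((1 - b**2) * (1 - c**2))

    p = corr.shape[0]
    out = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            Z = frozenset(range(p)) - {i, j}
            out[i, j] = out[j, i] = rho(i, j, Z)
    return out
```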
We propose a series of shared operations at each edge. We consider a feedforward network where each edge i, j is associated with a fixed sized vector, $o^k_{i,j}$, of dimensionality d at each layer k > 0; $o^0_{i,j}$ is initialized to the covariance entries at k = 0. For each edge we start with a neighborhood of the 6 adjacent nodes, i, j, i-1, i+1, j-1, j+1, for which we take all corresponding edge values from the covariance matrix, as illustrated in Figure 1. We proceed at each layer to increase the nodes considered for each edge, the output at each layer progressively increasing the receptive field, making sure all values associated with the considered nodes are present. The receptive field here refers to the original covariance entries which are accessible by a given $o^k_{i,j}$.

Figure 1: (a) Illustration of nodes and edges "seen" at edge 4,13 in layer 1 and (b) receptive field at layer 1. All entries in grey show the $\sigma_{i,j}$ in the covariance matrix used to compute $o_{4,13}$. (c) shows the dilation process and receptive field (red) at higher layers.

The equations defining the process (cf. Figure 1) are:

$$o^1_{i,j} = f_{w^1}\big(o^0_{i,j},\, o^0_{i-1,j},\, o^0_{i,j-1},\, o^0_{i+1,j-1}, \ldots\big)$$
$$o^{l}_{i,j} = f_{w^l}\big(o^{l-1}_{i,j},\, o^{l-1}_{i-d_l,j},\, o^{l-1}_{i,j-d_l},\, o^{l-1}_{i-d_l,j-d_l}, \ldots\big)$$
$$\hat{p}_{i,j} = \sigma\big(w^{l+1} \cdot o^{l}_{i,j}\big)$$

Here a neural network $f_{w^k}$ is applied at each edge at each layer and a dilation sequence $d_k$ is used. We call a network of this topology a D-Net of depth l. We use dilation here to allow the receptive field to grow fast, so the network does not need a great deal of layers. We make the following observations:

Proposition 2. For general P it is a necessary condition for P-consistency that the receptive field of the D-Net covers all entries of the covariance, $\hat{\Sigma}$, at any edge it is applied.

Proof. Consider nodes i and j and a chain graph such that i and j are adjacent to each other in the matrix but are at the terminal nodes of the chain graph. One would need to consider all other variables to be able to explain away the correlation. Alternatively, we can see this directly from expanding Eq. (5).

Intuitively, adjacent edges have a high overlap in their receptive fields and can easily share information about the non-overlapping components. This is analogous to a parametrized message passing.
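A minimal PyTorch sketch of a D-Net-style edge predictor built from dilated 2D convolutions, as in the implementation route described below; the 50 feature maps, 3×3 kernels, dilation schedule, and sigmoid output match the experimental section, while the class name and the single-channel input convention are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DNet(nn.Module):
    """Dilated convolutional edge predictor over a p x p covariance matrix."""
    def __init__(self, depth=6, channels=50):
        super().__init__()
        layers, in_ch = [], 1
        for k in range(depth):
            d = k + 1                       # dilation sequence d_k (39/50-node nets)
            layers += [nn.Conv2d(in_ch, channels, kernel_size=3,
                                 dilation=d, padding=d),   # keeps spatial size
                       nn.ReLU()]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.out = nn.Conv2d(channels, 1, kernel_size=1)   # final 1x1 convolution

    def forward(self, sigma_hat):
        # sigma_hat: (batch, 1, p, p) standardized empirical covariance
        h = self.features(sigma_hat)
        return torch.sigmoid(self.out(h))  # edge probability for every (i, j)
```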
For example, if edge (i, j) is explained by node k, then as k enters the receptive field of edge (i, j-1), the path through (i, j) can already be discounted. In terms of Eq. (5) this can correspond to storing computations that can be used by neighbor edges from lower levels in the recursion.

Here $f_{w^k}$ is shared amongst all nodes and thus we can implement this as a special kind of convolutional network. We make sure to have considered all edges relevant to the current set of nodes in the receptive field, which requires us to add values from filters applied at the diagonal to all edges. In Figure 1 we illustrate the nodes and receptive field considered with respect to the covariance matrix. This also motivates a straightforward implementation using 2D convolutions (adding separate convolutions at i,i and j,j to each i,j at each layer to achieve the specific input pattern described), shown in Figure 2.

Figure 2: Diagram of the DeepGraph structure discovery architecture used in this work. The input is first standardized and then the sample covariance matrix is estimated. A neural network consisting of multiple dilated convolutions and a final 1 × 1 convolution layer is used to predict edges corresponding to non-zero entries in the precision matrix.

Considering the general n > p case is illustrative. However, the main advantage of making the computations differentiable and learned from data is that we can take advantage of the sparsity and structure assumptions on the target function to obtain more efficient results than naive computation of partial correlation or matrix inversion. As n decreases our estimate of $\rho_{i,j}$ becomes inexact, and here a data driven model which can take advantage of the assumptions on the underlying distribution can more accurately recover the graph structure.

The convolution structure is dependent on the order of the variables used to build the covariance matrix, which is arbitrary. Permuting the input data we can obtain another estimate of the output. In the experiments, we leverage these various estimates in an ensembling approach, averaging the results of several permutations of the input. We observe that this generally yields a modest increase in accuracy, but that even a single node ordering can show substantially improved performance over competing methods in the literature.

Ultimately our choice of architecture, with shared computations and multiple layers, is highly scalable as compared with a naive fully connected approach and allows leveraging existing optimized 2-D convolutions. In preliminary work we also considered fully connected layers, but this proved to be much less efficient in terms of storage and scalability than using deep convolutional networks."}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "Our experimental evaluations focus on the challenging high dimensional settings in which p > n and consider both synthetic data and real data from genetics and neuroimaging. In our experiments we explore how well networks trained on parametric samples generalize, both to unseen synthetic data and to several real world problems. In order to highlight the generality of the learned networks, we apply the same network to multiple domains. We train networks taking in 39, 50, and 500 node graphs. The former sizes are chosen based on the real data we consider in subsequent sections. We refer to these networks as DeepGraph-39, 50, and 500. In all cases we have 50 feature maps of 3 × 3 kernels. The 39 and 50 node networks have 6 convolutional layers and $d_k = k + 1$. The 500 node network has 8 convolutional layers and $d_k = 2^{k+1}$. We use ReLU activations. The last layer has a 1 × 1 convolution and a sigmoid outputting a value of 0 to 1 for each edge.
We sample P(X|G) with a sparse prior on P(G) as follows. We first construct a lower diagonal matrix, L, where each entry has a probability α of being zero. Non-zero entries are set uniformly between -c and c. Multiplying LLᵀ gives a sparse positive definite precision matrix, Θ. This gives us our P(Θ|G) with a sparse prior on P(G). We sample from the Gaussian N(0, Θ⁻¹) to obtain samples of X. Here α corresponds approximately to a specific sparsity level in the final precision matrix, which we set to produce matrices 92-96% sparse, and c is chosen so that partial correlations range from 0 to 1.

Each network is trained continuously with new samples generated until the validation error saturates. For a given precision matrix we generate 5 possible X samples to be used as training data, with a total of approximately 100K training samples used for each network. The networks are optimized using ADAM (Kingma & Ba, 2015) coupled with the cross-entropy loss as the objective function (cf. Sec. 2.1). We use batch normalization at each layer. Additionally, we found that using the absolute value of the true partial correlations as labels, instead of hard binary labels, improves results.

Synthetic Data Evaluation To understand the properties of our learned networks, we evaluated them on different synthetic data than the ones they were trained on. More specifically, we used a completely different third-party sampler so as to avoid any contamination. We use DeepGraph-39 in a variety of settings. The same trained network is utilized in the subsequent neuroimaging evaluations as well. DeepGraph-500 is also used to evaluate larger graphs.

We used the BDGraph R-package to produce sparse precision matrices based on the G-Wishart distribution (Mohammadi & Wit, 2015) as well as the R-package rags2ridges (Peeters et al., 2015) to generate data from small-world networks corresponding to the Watts-Strogatz model (Watts & Strogatz, 1998). We compared our learned estimator against the scikit-learn (Pedregosa et al., 2011) implementation of Graphical Lasso with the regularizer chosen by cross-validation, as well as the Birth-Death Rate MCMC (BDMCMC) method from Mohammadi & Wit (2015).

For each scenario we repeat the experiment for 100 different graphs and small sample observations, showing the average area under the ROC curve (AUC), precision@k corresponding to 5% of possible edges, and calibration error (CE) (Mohammadi & Wit, 2015).

For graphical lasso we use the partial correlations to indicate confidence in edges; BDGraph automatically returns posterior probabilities, as does our method. Finally, to understand the effect of the regularization parameter, we additionally report the result of graphical lasso under the optimal regularizer setting on the testing data.

Our method dominates all other approaches in all cases with p > n (which also corresponds to the training regime). For the case of random Gaussian graphs with n=35 (as in our training data) and graph sparsity of 95%, we have superior performance and can further improve on this by averaging permutations. Next we apply the method to less straightforward synthetic data, with distributions typical of many applications. We found that, compared to baseline methods, our network performs particularly well with high-degree nodes and when the distribution becomes non-normal. In particular our method performs well on the relevant metrics with small-world networks, a very common family of graphs in real-world data, obtaining superior precision at the primary levels of interest.
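The evaluation metrics just listed are standard; a small sketch of AUC and precision@k for edge scores, using scikit-learn for the AUC (function names and shapes are illustrative assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def edge_metrics(scores, labels, k_frac=0.05):
    """scores, labels: (p, p) symmetric matrices of edge confidences / 0-1 labels."""
    iu = np.triu_indices_from(scores, k=1)        # count each edge once
    s, y = scores[iu], labels[iu]
    auc = roc_auc_score(y, s)
    k = max(1, int(k_frac * len(s)))              # top 5% of possible edges
    topk = np.argsort(-s)[:k]
    prec_at_k = y[topk].mean()
    return auc, prec_at_k
```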
Figure 3 shows examples of random and Watts-Strogatz small-world graphs used in these experiments.

Training a new network for each number of samples can pose difficulties with our proposed method. Thus we evaluated how robust the network DeepGraph-39 is to input covariances obtained from fewer or more samples. We find that overall the performance is quite good even when lowering the number of samples to n = 15; we obtain superior performance to the other approaches (Table 1). We also applied DeepGraph-39 on data from a multivariate generalization of the Laplace distribution (Gómez et al., 1998). As in other experiments, precision matrices were sampled from the G-Wishart at a sparsity of 95%, and (Gómez et al., 1998, Proposition 3.1) was applied to produce samples. We find that DeepGraph-39 performs competitively, despite the discrepancy between train and test distributions. Experiments with variable sparsity are considered in the supplementary material, which finds that for very sparse graphs the networks remain robust in performance, while for increased density performance degrades but remains competitive.

Using the small-world network data generator (Peeters et al., 2015), we demonstrate that we can update the generic sparse prior to a structured one. We re-train DeepGraph-39 using only 1000 examples of small-world graphs mixed with 1000 examples from the original uniform sparsity model. We perform just one epoch of training and observe markedly improved performance on this test case, as seen in the last row of Table 1.

For our final scenario we consider the very challenging setting with 500 nodes and only n = 50 samples. We note that the MCMC based method fails to converge at this scale, while graphical lasso is very slow, as seen in the timing performance, and barely performs better than chance. Our method convincingly outperforms graphical lasso in this scenario. Here we additionally report precision at just the first 0.05% of edges since competitors perform nearly at chance at the 5% level.
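Pulling the pieces together, here is a compact sketch of the training loop (ADAM, cross-entropy, continuously generated samples), reusing the DNet, edge_cross_entropy and sample_training_pair sketches above; the learning rate, step count, and omission of batch normalization and soft labels are simplifying assumptions, not the released training code.

```python
import torch

model = DNet(depth=6, channels=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10000):                      # train until validation error saturates
    sigma_hat, y = sample_training_pair(p=39, n=35)
    x = torch.tensor(sigma_hat, dtype=torch.float32)[None, None]   # (1, 1, p, p)
    # Soft labels: absolute true partial correlations could replace the 0/1 matrix.
    target = torch.tensor(y, dtype=torch.float32)
    pred = model(x)[0, 0]
    loss = edge_cross_entropy(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```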
DeepGraph outperforms other methods in terms of AP, AUC,\n\nand precision at 5% (the approximate true sparsity). In terms of precision and AUC DeepGraph has better\nperformance in all cases except n > p.\n\nWe ceamnitte the averace evecitan time af aur methad eamnared ta Crank T acen and RNCMeranh an a\nfrom distibuuons with those underlying graphs. Veepurapn outperforms other methods in terms OF AF, AUT\nand precision at 5% (the approximate true sparsity). In terms of precision and AUC DeepGraph has bette:\nperformance in all cases except n > p.\n\nWe compute the average execution time of our method compared to Graph Lasso and BDGraph on <\nCPU in Table[4] We note that we use a production quality version of graph lasso (Pedregosa et al\n(2011), whereas we have not optimized the network execution, for which known strategies may be\n\napplied\nExperimental Setup Method Prec@0.05% Prec@5% AUC CE\nrandom 0.052 \u00a3 0.002 |~0.053 \u00a3 0.000 | 0.500 \u00a3 0.000 | 0.05\n\nGlasso 0.156+0.010 | 0.055+0.001 | 0.501 + 0.000 | 0.05\n\nGaussian Random Graphs Glasso (optimal) 0.162 +0.010 | 0.055+0.001 | 0.501 + 0.000 | 0.05\n(n=50,p=500) DeepGraph-500 0.449 + 0.018 | 0.109+0.002 | 0.543 +0.002 | 0.06\nDeepGraph-500+Perm | 0.583 + 0.018 | 0.116 + 0.002 | 0.547 + 0.002 | 0.06\n\nTahle 9: Bvneriment an SON) nede oranhe with anly SO camnilec raneated 100 timec\nTable 2: Experiment on 500 node graphs with only 50 samples repeated 100 times. Figure 3:\n\n1 d Example o\nImproved performance in all metrics.\n\n(a) random and (b) smal\n\navarld weed in avneriment:\nCancer Genome Data We perform experiments on a gene expression dataset described in|Honorio\n(2012). The data come from a cancer genome atlas from 2360 subjects for various types of\ncancer. We used the first 50 genes from|Honorio et al.|(2012| Appendix C.2) of commonly regulated\ngenes in cancer. We evaluated on two groups of subjects, one with breast invasive carcinoma (BRCA)\nconsisting of 590 subjects and the other colon adenocarcinoma (CODA) consisting of 174 subjects.\nEvaluating edge selection in real-world data is challenging. We use the following methodology: for\neach method we select the top-k ranked edges, recomputing the maximum likelihood precision matrix\nwith support given by the corresponding edge selection method. We then evaluate the likelihood on a\nheld-out set of data. We repeat this procedure for a range of k. We rely on Algorithm 0 in|Hara &\nto compute the maximum likelihood precision given a support. The experiment is\nrepeated for each of CODA and BRCA subject groups 150 times. Results are shown in Figure[4] In\nall cases we use 40 samples for edge selection and precision estimation. We compare with graphical\n\nlasso as well as the Ledoit-Wolf shrinkage estimator (Ledoit & Wolf}|2004). We additionally consider\nthe MCMC based approach described in previous section. For graphical lasso and Ledoit-Wolf, edge\n\nselection is based on thresholding partial correlation (Balmand & Dalalyan| 2016).\nAdditionally, we evaluate the stability of the solutions provided by the various methods. In several\napplications a low variance on the estimate of the edge set is important. 
Additionally, we evaluate the stability of the solutions provided by the various methods. In several applications a low variance on the estimate of the edge set is important. In Table 3 we report Spearman correlations between pairs of solutions, as this is a measure of a monotone link between two variables. DeepGraph has far better stability in the genome experiments and is competitive on the fMRI data.

Table 3: Average Spearman correlation results for real data showing stability of solution amongst 50 trials. Legible entries from the source:
Method | Gene BRCA | Gene COAD | ABIDE Control | ABIDE Autistic
Graph Lasso | 0.25 ± .003 | — | 0.21 ± .003 | 0.21 ± .003
Ledoit-Wolf | 0.12 ± 0.002 | 0.13 ± .003 | — | —
BDGraph | — | — | — | N/A

Table 4: Avg. execution time over 10 trials for the 39 and 500 node problems on a CPU for Graph Lasso, BDMCMC, and DeepGraph.

Resting State Functional Connectivity We evaluate our graph discovery method to study brain functional connectivity in resting-state fMRI data. Correlations in brain activity measured via fMRI reveal functional interactions between remote brain regions. These are an important measure to study psychiatric diseases that have no known anatomical support.
Typical connectome analysis describes each subject or group by a GGM measuring functional connectivity between a set of regions. We use the ABIDE dataset (Di Martino et al., 2014), a large scale resting state fMRI dataset. It gathers brain scans from 539 individuals suffering from autism spectrum disorder and 573 controls over 16 sites. For our experiments we use an atlas with 39 regions of interest derived in earlier work.

We use the network DeepGraph-39, with the same network and parameters as in the synthetic experiments, using the same evaluation protocol as used in the genomic data. For both control and autism patients we use time series from 35 random subjects to estimate edges and corresponding precision matrices. We find that for both the autism and control groups we can obtain edge selection comparable to graph lasso for very few selected edges. When the number of selected edges is in the range above 25 we begin to perform significantly better in edge selection, as seen in Fig. 4. We evaluated the stability of the results as shown in Tab. 3; DeepGraph outperformed the other methods across the board.

ABIDE has high variability across sites and subjects. As a result, to resolve differences between approaches, we needed to perform 1000 folds to obtain well-separated error bars. We found that the birth-death MCMC method took very long to converge on this data; moreover, the need for many folds to obtain significant results amongst the methods made this approach prohibitively slow to evaluate.

Figure 4: Average test likelihood for COAD and BRCA subject groups in gene data and neuroimaging data using different numbers of selected edges. Each experiment is repeated 50 times for the genetics data. It is repeated approximately 1500 times in the fMRI to obtain significant results due to the high variance in the data. DeepGraph with averaged permutation dominates in all cases for the genetics data, while DeepGraph+Permutation is superior or equal to competing methods in the fMRI data.

Our method was competitive with strong baselines. Even in cases that deviate from standard GGM sparsity assumptions (e.g. Laplacians, small-world graphs) it performed substantially better. When fine-tuning on the target distribution, performance further improves. Most importantly, the learned estimator generalizes well to real data, finding relevant stable edges. We also observed that the learned estimators generalize to variations not seen at training time (e.g. different n or sparsity), which points to them potentially learning generic computations. This also shows potential to more easily scale the method to different graph sizes. One could consider transfer learning, where a network for one size of data is used as a starting point to learn a network working on larger dimensional data.

Penalized maximum likelihood can provide performance guarantees under restrictive assumptions on the form of the distribution and not considering the regularization path. In the proposed method one could obtain empirical bounds under the prescribed data distribution. Additionally, at execution time the speed of the approach can allow for re-sampling based uncertainty estimates and efficient model selection (e.g. cross-validation) amongst several trained estimators.

We have introduced the concept of learning an estimator for determining the structure of an undirected graphical model. A network architecture and sampling procedure for learning such an estimator for the case of sparse GGMs was proposed. We obtained competitive results on synthetic data with various underlying distributions, as well as on challenging real-world data. Empirical results show that our method works particularly well compared to other approaches for small-world networks, an important class of graphs common in real-world domains. We have shown that neural networks can obtain improved results over various statistical methods on real datasets, despite being trained with samples from parametric distributions. Our approach enables straightforward specifications of new priors and opens new directions in efficient graphical structure discovery from few examples."}, {"section_index": "4", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Figure 5: Example solution from DeepGraph and Graph Lasso in the small sample regime on the same 35 samples, along with a larger sample solution of Graph Lasso for reference. DeepGraph is able to extract similar key edges as graphical lasso.

We show the edges returned by Graph Lasso and DeepGraph for a sample from 35 subjects (Fig. 5) in the control group. We also show the result of a large-sample estimate based on 368 subjects from graphical lasso. In visual evaluation of the edges returned by DeepGraph we find that they closely align with results from a large-sample estimation procedure.
Furthermore, we can see several edges in the subsample which were particularly strongly activated in both methods.

This work is partially funded by Internal Funds KU Leuven, FP7-MC-CIG 334380, DIGITEO 2013-0788D - SOPRANO, and ANR-11-BINF-0004 NiConnect. We thank Jean Honorio for providing pre-processed Cancer Genome Data.

Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In ICML, 2014.

Tony Cai, Weidong Liu, and Xi Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594-607, 2011.

Table 5: Covariance prediction of ABIDE data. Averaged over 50 trials of 35 samples from the ABIDE Control group.

Experimental Setup | Method | Prec@5% | AUC | CE
Gaussian Random Graphs (n=35, p=39, sparsity=2%) | Glasso | 0.464 ± 0.038 | 0.726 ± 0.021 | 0.02
 | Glasso (optimal) | 0.519 ± 0.035 | 0.754 ± 0.019 | 0.02
 | BDGraph | 0.587 ± 0.033 | 0.811 ± 0.017 | 0.15
 | DeepGraph-39 | 0.590 ± 0.026 | 0.810 ± 0.019 | 0.03
 | DeepGraph-39+Perm | 0.598 ± 0.026 | 0.831 ± 0.017 | 0.03
Gaussian Random Graphs (n=35, p=39, sparsity=15%) | Glasso | 0.732 ± 0.046 | 0.562 ± 0.013 | 0.32
 | Glasso (optimal) | 0.847 ± 0.029 | 0.595 ± 0.011 | 0.33
 | BDGraph | 0.861 ± 0.015 | 0.654 ± 0.013 | 0.33
 | DeepGraph-39 | 0.678 ± 0.032 | 0.643 ± 0.012 | 0.33
 | DeepGraph-39+Perm | 0.792 ± 0.023 | 0.660 ± 0.011 | 0.33

Table 6: For each scenario we generate 100 graphs with 39 nodes, and corresponding data matrices sampled from distributions with those underlying graphs. The number of samples is indicated by n.

Using our framework it is possible to attempt to directly predict an accurate covariance matrix given a noisy one constructed from few observations. This is a more challenging task than predicting the edges. In this section we show preliminary experiments which, given an empirical covariance matrix from few observations, attempt to predict a more accurate covariance matrix that takes into account the underlying sparse data dependency structure.

One challenge is that the outputs of our covariance predictor must lie on the positive semidefinite cone, thus we choose to instead predict on the Cholesky decompositions, which allows us to always produce positive definite covariances. We train a structure similar to DeepGraph-39, modifying the last layer to be a fully connected linear layer that predicts on the Cholesky decomposition of the true covariance matrices generated by our model, with a squared loss.

We evaluate this network using the ABIDE dataset described in Section 3. The ABIDE data has a large number of samples, allowing us to obtain a large sample estimate of the covariance and compare it to our estimator as well as graphical lasso and empirical covariance estimators. Using the large sample ABIDE empirical covariance matrix as reference, we find that we can obtain competitive ℓ2 and ℓ∞ norms using few samples. We use 403 subjects from the ABIDE Control group, each with a recording of 150-200 samples, to construct the covariance matrix, totaling 77,330 samples (some correlated). This acts as our very approximate estimate of the population Σ. We then evaluate covariance estimation on 35 samples using the empirical covariance estimator, graphical lasso, and DeepGraph trained to output covariance matrices. We repeat the experiment for 50 different subsamples of the data. We see in Table 5 that the prediction approach can obtain competitive results. In terms of ℓ2, graphical lasso performs better; however, our estimate is better than empirical covariance estimation and much faster than graphical lasso. In some applications such as robust estimation, a fast estimate of the covariance matrix (automatically embedding sparsity assumptions) can be of great use. For ℓ∞ error we see the empirical covariance estimation outperforms graphical lasso and DeepGraph for this dataset, while DeepGraph performs better than graphical lasso in terms of this metric.

We note these results are preliminary, as the covariance predicting networks were not heavily optimized; moreover, the ABIDE dataset is very noisy even when pre-processed, and thus even the large sample covariance estimate may not be accurate. We believe this is an interesting alternate application of our paper.
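A minimal sketch of the Cholesky parametrization used to keep predicted covariances positive definite; the network body is elided, and the dimensions, class name, and softplus on the diagonal are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CovariancePredictor(nn.Module):
    """Predict a Cholesky factor; L @ L.T is positive definite by construction."""
    def __init__(self, p, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(p * p, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, p * (p + 1) // 2)   # lower-triangular entries
        self.p = p

    def forward(self, sigma_hat):
        h = self.backbone(sigma_hat.reshape(-1, self.p * self.p))
        vals = self.head(h)
        L = sigma_hat.new_zeros(len(vals), self.p, self.p)
        idx = torch.tril_indices(self.p, self.p)
        L[:, idx[0], idx[1]] = vals
        diag = torch.arange(self.p)
        L[:, diag, diag] = F.softplus(L[:, diag, diag])     # strictly positive diagonal
        return L @ L.transpose(1, 2)                        # predicted covariance

# Trained with a squared loss against the true covariance, as described above.
```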
"}, {"section_index": "5", "section_name": "A.2 ADDITIONAL SYNTHETIC RESULTS ON SPARSITY", "section_text": "We investigate the effect of sparsity on DeepGraph-39, which has been trained with inputs that are 92-96% sparse. We find that DeepGraph performs well at the 2% sparsity level despite not seeing this at training time. At the same time performance begins to degrade at 15%, but is still competitive in several categories. The results are shown in Table 6. Further investigation can consider how alternate variation of sparsity at training time will affect these results."}, {"section_index": "6", "section_name": "A.3 APPLICATION OF LARGER NETWORK ON SMALLER INPUT", "section_text": "We perform a preliminary investigation of applying a network trained for a larger number of nodes to a smaller set of nodes. Specifically, we consider the breast invasive carcinoma group's gene data. We now take all 175 valid genes from Appendix C.2 of Honorio et al. (2012). We take the network trained on 500 nodes in the synthetic experiments section. We use the same experimental setup as in the gene experiments. The 175 × 175 covariance matrix is estimated from 40 samples and padded to the appropriate size. We observe that DeepGraph has similar performance to graph lasso, while permuting the input and ensembling the result gives substantial improvement.

Figure 6: Average test likelihood over 50 trials of applying a network trained for 500 nodes, used on a 175 node problem."}, {"section_index": "7", "section_name": "A.4 PERMUTATION AS ENSEMBLE METHOD", "section_text": "Permuting the input and averaging several permutations can produce an improved result empirically. We interpret this as a typical ensembling method. This can be an advantage of the proposed architecture, as we are able to easily use standard ensemble techniques. We perform an experiment to further verify that the permutation of the input (and subsequent inverse permutation) allows us to produce separate classifiers that have uncorrelated errors.
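A sketch of the permutation ensembling itself, assuming a trained `model` mapping a covariance matrix to edge probabilities as in the earlier sketches:

```python
import numpy as np

def permutation_ensemble(model, sigma_hat, n_perm=20, rng=None):
    """Average edge predictions over random node orderings.

    For a permutation perm, predict on the row/column-permuted covariance and
    map the resulting edge probabilities back with the inverse permutation.
    """
    rng = rng or np.random.default_rng(0)
    p = sigma_hat.shape[0]
    acc = np.zeros((p, p))
    for _ in range(n_perm):
        perm = rng.permutation(p)
        inv = np.argsort(perm)
        pred = model(sigma_hat[np.ix_(perm, perm)])   # predict on permuted input
        acc += pred[np.ix_(inv, inv)]                 # undo the permutation
    return acc / n_perm
```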
Treating each as a separate classifier, we compute the correlation coefficient of the errors on 50 synthetic input examples. We find that the average correlation coefficient of the errors of two classifiers is 0.028 ± 0.002, suggesting that they are uncorrelated. Finally, we note that the individual errors are relatively small, as can already be inferred from our extensive experimental results in Section 3. We also compute the average absolute error of all the outputs across each permutation for this set of inputs as 0.03; notably, the range of outputs is 0 to 1. Thus, since the prediction errors differ at each permutation but are accurate, we can average them and yield a lower total prediction error.
Finally, we note that our method is extremely efficient computationally, thus averaging the results of several permutations is practical even as the graph becomes large."}]
BJVEEF9lx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recent progress in artificial intelligence is driven by the ability to learn representations from data\nYet not all kinds of representations are equal, and many of the fundamental properties of representa:\ntions (both as theoretical constructs and as observed experimentally in humans) are missing. Perhap:\nthe most critical property of a system of representations is compositionality, which as described suc:\ncinctly in is when (i) it contains both primitive symbols and symbols tha\nare complex; and (ii) the latter inherit their syntactic/semantic properties from the former. Compo:\nsitionality is powerful because it enables a system of representation to support an infinite number o:\nsemantically distinct representations by means of combination. This argument has been supportec\nexperimentally; a growing body of evidence has shown that humans pos\nsess a small number of primitive systems of mental representation - of objects, agents, number anc\ngeometry - and new representations are built upon these core foundations.\nRepresentations learned with modern machine learning methods possess few or none of these prop-\nerties, which is a severe impediment. For illustration consider that navigation depends upon some\nrepresentation of geometry, and yet recent advances such as end-to-end autonomous driving (Bo-\n(2016) side-step building explicit geometric representations of the world by learning to\nmap directly from image inputs to motor commands. Any representation of geometry is implicit,\nand has the advantage that it is economical in only possessing information necessary for the task.\nHowever, this form of representation lacks (i) the ability to reuse these representations for other\nrelated tasks such as predicting object stability or performing mental rotation, (ii) the ability to com-\npose these representations with others, for instance to represent a set or count of geometric objects.\nand (iii) the ability to perform explicit inference using representations, for instance to infer why a\nparticular route would be faster or slower.\nThis contribution provides a computational model of mental representation which inherits the com-\npositional and productivity advantages of symbolic representations, and the data-driven and eco-\nnomical advantages of representations learned using deep learning methods. To this end, we model\nmental representations as a form of data-structure, which by design possess various forms of com-\npositionality. In addition, in step with deep learning methods we refrain from imposing a particular\nrepresentations on a system and allow it instead be learned. That is, rather than specify a concrete\ndata type (for example polygons or voxels for geometry), we instead define a class of representations\nas abstract data types, and impose invariants, or axioms, that any representation must adhere to.\nMathematicians have sought an axiomatic account of our mental representations since the end of\nthe nineteenth century, but both as an account of human mental representations, and as a means of\nspecifying representations for intelligent systems, the axiomatic specifications suffer from a number"}, {"section_index": "1", "section_name": "LEARNING APPROXIMATE DISTRIBUTION-SENSITIVE\nDATA STRUCTURES", "section_text": "Armando Solar Lezama\nasolar@csail.mit.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "of problems. 
Axioms are universally quantified - for all numbers, sets, points, etc. - while humans, in contrast, are not uniformly good at manipulating numbers of different magnitude (Hyde, 2011; Nuerk & Willmes, 2005; Dehaene, 1997), rotating geometry of different shapes (Izard et al., 2011), or sets of different cardinality. Second, axioms have no algorithmic content; they are declarative rules which do not suggest how to construct concrete representations that satisfy them. Third, only simple systems have reasonable axioms, whereas many representations are complex and cannot in practice be fully axiomatized; conventional axiomatic specifications do not readily accommodate partial specification. A fourth, potentially fatal threat is offered by Dehaene (1997), who shows that there are infinitely many systems, most easily dismissed by even a child as clearly not number-like, which satisfy Peano's axioms of arithmetic. Moreover, these 'nonstandard models of arithmetic' can never be eliminated by adding more axioms, leading Dehaene to conclude "Hence, our brain does not rely on axioms."

We extend, rather than abandon, the axiomatic approach to specifying mental representations, and employ it purely as a mechanism to embed domain-specific knowledge. We model a mental representation as an implementation of an abstract data type which adheres approximately to a probabilistic axiomatic specification. We refer to this implementation as a distribution-sensitive data-structure.

In summary, in this paper:

We introduce probabilistic axiomatic specifications as a quantifier-free relaxation of a conventional specification, which replaces universally quantified variables with random variables.

Synthesis of a representation is formulated as synthesis of functions which collectively satisfy the axioms. When the axioms are probabilistic, this amounts to maximizing the probability that the axioms are true.

We present a number of methods to approximate a probabilistic specification, reducing it to a continuous loss function.

We employ neural networks as function approximators, and through gradient-based optimization learn representations for a number of fundamental data structures.

Abstract data types model representations as a set of types and functions which act on values of those types. They can also be regarded as a generalized approach to algebraic structures, such as lattices, groups, and rings. The prototypical example of an abstract data type is the Stack, which models an ordered, first-in, last-out container of items. We can abstractly define a Stack of Items, in part, by defining the interface:

empty : Stack
push : Stack x Item → Stack
pop : Stack → Stack x Item
isempty : Stack → {0, 1}

The interface lists the function names and types (domains and range). 
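For readers who prefer code, this interface can be transcribed directly; the sketch below is purely illustrative, since the paper defines the interface abstractly rather than in any programming language:

```python
from dataclasses import dataclass
from typing import Callable, Generic, Tuple, TypeVar

S = TypeVar("S")  # the (opaque) carrier type of the Stack
I = TypeVar("I")  # the carrier type of the Item

@dataclass
class StackInterface(Generic[S, I]):
    """The four names of the Stack interface, bundled as first-class values."""
    empty: S
    push: Callable[[S, I], S]
    pop: Callable[[S], Tuple[S, I]]
    isempty: Callable[[S], int]  # 1 on the empty stack, 0 otherwise
```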
Note that this is a functional (rather than imperative) abstract data type, and each function in the interface has no internal state. For example, push is a function that takes an instance of a Stack and an Item and returns a Stack. empty : Stack denotes a constant of type Stack, the empty stack of no items.

The meaning of the constants and functions is not specified in the interface. To give meaning to these names, we supplement the abstract data type with a specification as a set of axioms. The specification as a whole is the logical conjunction of this set of axioms. Continuing our example, for all s ∈ Stack, i ∈ Item:

pop(push(s, i)) = (s, i)    (1)
isempty(empty) = 1          (2)
isempty(push(s, i)) = 0     (3)
pop(empty) = ⊥              (4)

A concrete representation of a stack is a data structure which assigns constants and functions to the names empty, push, pop and isempty. The data structure is a stack if and only if it satisfies the specification.

There are a number of distinct forms of compositionality with respect to data structures. One example is algorithmic compositionality, by which we can compose algorithms which use as primitive operations the interfaces to these representations. These algorithms can in turn form the interfaces to other representations, and so on.

An important property of an abstract data type which supports algorithmic compositionality is encapsulation. Encapsulation means that the particular details of how the functions are implemented should not matter to the user of the data type, only that it behaves as specified. Many languages enforce that the internals are unobservable, and that the data type can only be interacted with through its interface. Encapsulation means that data-structures can be composed without reasoning about their internal behavior.

In this paper however, we focus on parametric compositionality. Some data structures, in particular containers such as a stack, or set, or tree, are parametric with respect to some other type, e.g. the type of item. Parametric compositionality means, for example, that if we have a representation of a set, and a representation of a number, we get a set of numbers for free. Or, given a representation for a tree and representations for Boolean logic, we acquire the ability to form logical expressions for free.

Axiomatic specifications almost always contain universal quantifiers. The stack axioms are quantified over all possible stacks and all possible items. Real-world use of a data structure is however never exhaustive, and rarely uniform. Continuing our stack example, we will never store an infinite number of items, and the distribution over how many items are stored, and in which order relative to each other, will be highly non-uniform in typical use cases. Conventional data structures are agnostic to these distributional properties.

Data structures that exploit non-uniform query distributions are typically termed distribution-sensitive (Bose et al., 2013), and are often motivated by practical concerns, since queries observed in real-world applications are not uniformly random. An example is the optimum binary search tree on n keys, introduced by Knuth, which given a probability for each key has an average search cost no larger than any other binary search tree. More generally, distribution-sensitive data structures exploit underlying patterns in a sequence of operations in order to reduce time and space complexity.

To make the concept of a distribution-sensitive data-structure precise, we first develop the concept of a probabilistically axiomatized abstract data type (T, O, F), which replaces universally quantified variables in its specification with random variables. T and O are respectively sets of type and interface names. F is a set of type specifications, each taking the form m : τ for a constant of type τ, or o : τ1 → τ2 denoting a function from τ1 to τ2. Here τ ∈ T or is a Cartesian product τ1 x ... x τn.

A concrete data type σ implements an abstract data type by assigning a value (function or constant) to each name in O. A concrete data type is deemed a valid implementation only with respect to an algebraic specification A. A is a set of equational axioms of the form p = q, where p and q are constants, random variables, or transformations of random variables by functions in O.
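As a deterministic illustration of these definitions, the sketch below gives a concrete σ for the Stack interface (continuing the StackInterface sketch above) and checks it against axioms (1)-(3); the tuple-based implementation and helper names are our own:

```python
def list_stack() -> StackInterface[tuple, object]:
    # immutable tuples keep the implementation functional (no internal state)
    return StackInterface(
        empty=(),
        push=lambda s, i: s + (i,),
        pop=lambda s: (s[:-1], s[-1]),  # axiom (4): pop(empty) raises, i.e. is undefined
        isempty=lambda s: 1 if s == () else 0,
    )

def check_axioms(st, items):
    ok = st.isempty(st.empty) == 1                   # axiom (2)
    s = st.empty
    for i in items:
        ok = ok and st.pop(st.push(s, i)) == (s, i)  # axiom (1)
        ok = ok and st.isempty(st.push(s, i)) == 0   # axiom (3)
        s = st.push(s, i)
    return ok

assert check_axioms(list_stack(), ["a", "b", "c"])
```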
Since a transformation of a random variable yields a random variable, and an axiom is simply a predicate of its left and right hand side arguments, random variables present in an axiom imply that the axiom itself is a Boolean-valued random variable. For example, if we have a distribution over items i of the stack, axiom (1) itself is a random variable which is true or false depending on i, push, and pop, and can only be satisfied with some probability. We let P[A(σ)] denote the probability that the axioms hold:

P[A(σ)] = P[∧i pi = qi]

When P[A(σ)] = 1, σ can be said to fully satisfy the axioms. More generally, with respect to a space Σ of concrete data types, we denote the maximum likelihood σ* as one which maximizes the probability that the axioms hold:

σ* = argmax_{σ ∈ Σ} P[A(σ)]

Probabilistic axioms do not imply that the concrete data-structure itself is probabilistic. On the contrary, we are concerned with specifying and synthesizing deterministic concrete data structures which exploit uncertainty stemming only from the patterns in which the data-structure is used.

A probabilistic specification is not easier to satisfy than a universally quantified one, but it can lend itself more naturally to a number of approximations. In the following we outline a number of relaxations we apply to a probabilistic abstract data type to make synthesis tractable.

Each type τ ∈ T will correspond to a finite dimensional real valued multidimensional array R^n. Interface functions are continuous mappings between these arrays."}, {"section_index": "3", "section_name": "UNROLL AXIOMS", "section_text": "Axiom (1) of the stack is intensional in the sense that it refers to the underlying stack s. This provides an inductive property allowing us to fully describe the behavior of an unbounded number of push and pop operations with a single equational axiom. However, from an extensional perspective, we do not care about the internal properties of the stack; only that it behaves in the desired way. Put plainly, we only care that if we push an item i to the stack, then pop, that we get back i. We do not care that the stack is returned to its initial state, only that it is returned to some state that will continue to obey this desired behavior.

An extensional view leads more readily to approximation, since we cannot expect to implement a stack which satisfies the inductive property of axiom (1) if it is internally a finite dimensional vector. Instead we can unroll the axiom to be able to stack some finite number n of items:

pop(push(empty, i1)) = (empty, i1)
pop(push(push(empty, i1), i2)) = (push(empty, i1), i2)
...

"}, {"section_index": "4", "section_name": "APPROXIMATE DISTRIBUTIONS WITH DATA", "section_text": "We approximate random variables by a finite data distribution assumed to be a representative set of samples from that distribution. Given an axiom p = q, we denote by p̄ and q̄ the values (arrays) computed by evaluating p and q respectively with concrete data from the data distributions of the random variables and the interface functions.

We relax equality constraints in axioms to a distance function, in particular the L2 norm. This transforms the equational axioms into a loss function. Given i axioms, the approximate maximum likelihood concrete data type σ̂* is then:

σ̂* = argmin_σ Σi ||p̄i − q̄i||2

Constants and parameterized functions (e.g. neural networks) which minimize this loss function then compose a distribution-sensitive concrete data type.
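A minimal PyTorch-style sketch of this relaxation for the unrolled stack axiom; the two-layer architectures, flat 784-dimensional representations, and helper names are our own simplifications, not the paper's exact setup:

```python
import torch
import torch.nn as nn

D = 28 * 28  # items and stack states are both flat 784-vectors in this sketch

push = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))
pop = nn.Sequential(nn.Linear(D, 2 * D))   # emits (stack, item), concatenated
empty = torch.zeros(1, D)

def axiom_loss(items):
    """L2 relaxation of pop(push(s, i)) = (s, i), unrolled over n pushes.
    items: (batch, n, D) tensor sampled from the item distribution."""
    batch, n, _ = items.shape
    s = empty.expand(batch, D)
    loss = items.new_zeros(())
    for t in range(n):
        i_t = items[:, t, :]
        s_next = push(torch.cat([s, i_t], dim=1))
        out = pop(s_next)
        s_rec, i_rec = out[:, :D], out[:, D:]
        loss = loss + ((s_rec - s) ** 2).mean() + ((i_rec - i_t) ** 2).mean()
        s = s_next
    return loss

opt = torch.optim.Adam(list(push.parameters()) + list(pop.parameters()), lr=1e-4)
```

Minimizing axiom_loss over sampled items jointly trains push and pop so that, approximately, popping returns the last pushed item and the prior stack state.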
"}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "We successfully synthesized approximate distribution-sensitive data-structures from a number of abstract data types:

Natural number (from Peano's axioms)
Stack
Queue
Set
Binary tree

With the exception of natural number (for which we used Peano's axioms), we use axiomatizations from the literature. As described in section 4, since we use finite dimensional representations, we unroll the axioms some finite number of times (e.g., to learn a stack of three items rather than an unbounded one) and "extensionalize" them.

In each example we used single-layer convolutional neural networks with 24 filters of size 3 by 3 and rectifier non-linearities. In container examples such as Stack and Queue, the Item type was sampled from the MNIST dataset, and the internal stack representation was chosen (for visualization) to also be a 28 by 28 matrix. We minimized the equational distance loss function described in section 3 using the Adam optimization algorithm, with a learning rate of 0.0001. In figures 1 and 2 we visualize the properties of the learned stack.

To explore compositionality, we also learned a Stack, Queue and Set of Number, where Number was itself a data type learned from Peano's axioms.

Figure 1: Validation of stack trained on MNIST digits, and introspection of the internal representation. Row push shows images pushed onto the stack from data in sequence. Row pop shows images taken from the stack using the pop function. Their equivalence demonstrates that the stack is operating correctly. Row stack shows the internal representation after push and pop operations. The stack is represented as an image of the same dimension as MNIST (28 by 28) arbitrarily. The stack learns to compress three images into the space of one, while maintaining the order. It deploys an interesting interlacing strategy, which appears to exploit some derivative information.

The learned internal representations depend on three things: (i) the axioms themselves, (ii) the architecture of the networks for each function in the interface, and (iii) the optimization procedure. In the stack example, we observed that if we decreased the size of the internal representation of a stack, we would need to increase the size and complexity of the neural network to compensate. This implies that statistical information about images must be stored somewhere, but there is some flexibility over where.

Figure 2: Generalization of the stack. Top left to top right: 10 images stacked in sequence using push. Bottom right to bottom left: result from calling pop on the stack 10 times. This stack was trained to stack three digits. 
It appears to generalize partially to four digits but quickly degrades after that. Since the stack is finite dimensional, it is not possible for it to generalize to arbitrarily long sequences of push operations.

Figure 3: Left: Stack versus queue encoding. Three MNIST images (top row) were enqueued onto the empty queue (middle row left), and pushed onto the empty stack (bottom row left). The middle row shows the internal queue representation after each enqueue operation, while the bottom is the internal stack representation after each push. In this case, the learned stack representation compresses pixel intensities into different striated sections of the real line, putting data about the first stacked items at lower values and then shifting these to higher values as more items are stacked. This strategy appears different from that in figure 1, which notably was trained to a lower error value. The internal queue representation is less clear; the hexagonal dot pattern may be an artifact of optimization or critical to its encoding. Both enqueue and push had the same convolutional architecture. Right: Internal representations of natural numbers from 0 (top) to 19 (bottom). Natural numbers are internally represented as a vector of 10 elements. Number representations on the left are found by repeatedly applying the successor function, e.g. (succ(zero), succ(succ(zero)), ...). Numbers on the right are found by encoding machine integers into this internal representation.

Given the same architecture, the system learned different representations depending on the axioms and optimization. The stack representation learned in figure 1 differs from that in figure 3, indicating that there is not a unique solution to the problem, and different initialization strategies will yield different results. The queue internal representation is also different from them both, and the encoding is less clear. The queue and stack representations could have been the same (with only the interface functions push, pop, enqueue and dequeue taking different form).

As shown in figure 2, data-structures exhibit some generalization beyond the data distributions on which they are trained. In this case, a stack trained to store three items is able to store four with some error, but degrades rapidly beyond that. Of course, we cannot expect a finite capacity representation to store an unbounded number of items; lack of generalization is the cost of having optimized performance on the distribution of interest.

Our contribution builds upon the foundations of distribution-sensitive data structures (Bose et al., 2013), but departs from conventional work on distribution-sensitive data structures in that: (i) we synthesize data structures automatically from specification, and (ii) the distributions of interest are complex data distributions, which prevents closed form solutions as in the optimum binary tree.

Our approach to learning representation can be viewed as a form of data-type synthesis from specification. From the very introduction of abstract data types, verification that a given implementation satisfies its specification was a motivating concern (Guttag et al., 1975; Spitzen & Wegbreit, 1975). Modern forms of function synthesis (Solar-Lezama, 2009) use verification as an oracle to assist with synthesis. 
Our approach in a broad sense is similar, in that derivatives of a loss function, which is derived from relaxing the specification, guide the optimization through the parameterized function spaces.

Various forms of machine learning and inference learn representations of data. Our approach bears resemblance to the auto-encoder, which exploits statistics of a data distribution to learn a compressed representation as a hidden layer of a neural network. As in our approach, an auto-encoder is distribution sensitive by the constraints of the architecture and the training procedure (the hidden layer is of smaller capacity than the data, which forces the exploitation of regularities). However, an auto-encoder permits just two operations: encode and decode, and has no explicit notion of compositionality.

A step closer to our approach than the auto-encoder are distributed representations of words as developed in (Mikolov et al., 2000). These representations have a form of compositionality such that vector arithmetic on the representation results in plausible combinations (Air + Canada = AirCanada).

Probabilistic assertions appear in first-order lifting (Poole, 2003), and in the probabilistic assertions of Sampson et al., where the implementation of the data type is a program; the main difference in our setting is that we synthesize the data type from the probabilistic assertion. Sankaranarayanan (2014) seeks upper and lower bounds for the probability of an assertion for programs which operate on uncertain data.

Recent work in deep learning has sought to embed discrete data structures into continuous form. Examples are the push down automata (Sun et al.), networks containing stacks (Grefenstette et al., 2015), and memory networks (Sukhbaatar et al., 2015). Our approach can be used to synthesize an arbitrary data-structure, purely from its specification, but is parameterized by the neural network structure. This permits it more generality, with a loss of efficiency."}, {"section_index": "6", "section_name": "8 DISCUSSION", "section_text": "In this contribution we presented a model of mental representations as distribution-sensitive data structures, and a method which employs neural networks (or any parameterized function) to synthesize concrete data types from a relaxed specification. We demonstrated this on a number of examples, and visualized the results from the stack and queue.

One of the important properties of conventional data structures is that they compose; they can be combined to form more complex data structures. In this paper we explored a simple form of parametric composition by synthesizing containers of numbers. This extends naturally to containers of containers, e.g. sets of sets, or sets of sets of numbers. Future work is to extend this to richer forms of composition. In conventional programming languages, trees and sets are often made by composing arrays, which are indexed with numbers. This kind of composition is fundamental to building complex software from simple parts.

In this work we learned representations from axioms. Humans, in contrast, learn representations mostly from experience in the world. 
One rich area of future work is to extend data-structure learning to the unsupervised setting, such that, for example, an agent operating in the real world would learn geometric data-structures purely from observation."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning Deep Architectures for AI, volume 2. 2009.

Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to End Learning for Self-Driving Cars. arXiv:1604.07316, 2016. URL http://arxiv.org/abs/1604.07316

Prosenjit Bose, John Howat, and Pat Morin. Space-Efficient Data Structures, Streams, and Algorithms: Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday, chapter A History of Distribution-Sensitive Data Structures, pp. 133-149. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013. URL http://dx.doi.org/10.1007/978-3-642-40273-9_10

Stanislas Dehaene. The Number Sense. 1997.

Jerry A. Fodor and Ernest Lepore. The Compositionality Papers. Oxford University Press, 2002.

Daniel C. Hyde. Two Systems of Non-Symbolic Numerical Cognition. Frontiers in Human Neuroscience, 5(November):1-8, 2011.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. arXiv, 1:1-9, 2000.

H.-C. Nuerk and K. Willmes. On the magnitude representations of two-digit numbers. Psychology Science, 47(1):52-72, 2005.

David Poole. First-order probabilistic inference. In IJCAI International Joint Conference on Artificial Intelligence, pp. 985-991, 2003.

Sriram Sankaranarayanan. Static Analysis for Probabilistic Programs: Inferring Whole Program Properties from Finitely Many Paths. pp. 447-458, 2014.

Armando Solar-Lezama. The sketching approach to program synthesis. Lecture Notes in Computer Science, 5904 LNCS:4-13, 2009.

Elizabeth S. Spelke and Katherine D. Kinzler. Core knowledge, 2007.

Jay Spitzen and Ben Wegbreit. The verification and synthesis of data structures. Acta Informatica, 4(2):127-144, 1975.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. pp. 1-11, 2015. URL http://arxiv.org/abs/1503.08895"}]
S1Jhfftgx
[{"section_index": "0", "section_name": "ENFORCING CONSTRAINTS ON OUTPUTS\nWITH UNCONSTRAINED INFERENCE", "section_text": "Jay Yoon Lee\nCarnegie Mellon University\nPittsburgh, PA\nIncreasingly, practitioners apply neural networks to complex problems in natu:\nral language processing (NLP), such as syntactic parsing, that have rich output\nstructures. Many such applications require deterministic constraints on the output\nvalues; for example, requiring that the sequential outputs encode a valid tree. While\nhidden units might capture such properties, the network is not always able to\nlearn them from the training data alone, and practitioners must then resort to post\nprocessing. In this paper, we present an inference method for neural networks that\nenforces deterministic constraints on outputs without performing post-processing\nor expensive discrete search over the feasible space. Instead, for each input, we\nnudge the continuous weights until the network\u2019s unconstrained inference proce:\ndure generates an output that satisfies the constraints. We find that our method\nreduces the number of violating outputs by up to 81%, while improving accuracy"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many neural networks have discrete-valued output units that correspond to an inference or predictior\nabout an input. Often, a problem might involve multiple discrete outputs. Unlike multiclass classi\nfication, which associates a single discrete output with each input, so called structured predictior\nproblems associate multiple outputs with each input. For example, in multi-label classification\ninstead of predicting a single relevant class pertaining to the image or sentence, we must predict al\nrelevant classes: the image contains a dog, a tree, and a sky. In sequence prediction problems, the\ndiscrete outputs might be a sequence of words or symbols that must form a coherent translation of <\nsource language sentence (Cho et al|| 2 description of an image (Vinyal\n, answer to a question (Kumar et al.|/2016), or a parse-tree for an input sentence (Viny:\n). Crucially, in structured prediction, the output values are interdependent. Even thougt\nnetworks usually predict outputs independently or sequentially (one output at a time), the\nhidden units allow them to successfully capture many dependencies.\nAs a motivating example, consider a sequence-to-sequence network that inputs a sentence and outputs\na sequence of \u201cshift-reduce\u201d commands that describe the sentence\u2019s parse tree. Briefly, the shift-\nMichael Wick, Jean-Baptiste Tristan\nmichael -wick, jean.baptiste.tristan}@oracle.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Sometimes, the outputs must obey hard constraints. For example, in sequence labeling with BILOU\ncoding, a \u2018begin\u2019 marker B cannot immediately follow an \u2018inside\u2019 marker I (Ratinov & Roth\n2009). In clustering, pairwise binary decisions must obey transitivity so that they yield a valic\n\n*quivalence class relation over the data points (McCallum & Wellner|/2005) 2006} /2008}\na valid parse tree\n\nn syntactic/dependency parsing, the output sequence must encode (McDonalc\na Pereira] 2006} [Vinyals et al] {2016}. 
In formal language generation or neural compilers, the output must belong to a context-free language or compile (Reed & de Freitas, 2016). In dual decomposition approaches to joint inference, copies of variables must satisfy equality constraints (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Finally, in some ensemble methods, the outputs of multiple conditionally independent classifiers must reach a consensus on the output class. Indeed, there are a tremendous number of problems that require hard constraints on the outputs. Unlike softer dependencies, violating a hard constraint is often unacceptable because the output of the network would not "type-check", causing problems for downstream components. Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.

To be successful, the network must generate commands that imply a valid tree over the entire input sentence. However, the decoder outputs just a single command at a time, producing some outputs that are not globally-consistent, valid shift-reduce programs. Indeed, the output may not have enough shifts to include every input token in the tree or may attempt to reduce when the stack is empty. For example, the following input sentence " So it 's a very mixed bag . " comprises ten space-delimited tokens (the quotations are part of the input), but our unconstrained sequence-to-sequence network outputs an invalid sequence with only nine shifts: ssr!sr!ssssrrr!rr!ssrrrrrr!. We must introduce another shift so the last token is pushed onto the stack and issue another reduce so it is inserted into the tree.

We could attempt to fix the output with post-processing, but where is the right place to insert these commands in the sequence? There are 406 = choose(29, 2) candidate locations. Further complicating our post-processing dilemma is the fact that the output contains several other errors that are seemingly unrelated to the constraint. Instead, we could attempt to fix the problem with a more sophisticated decoder, but this is difficult because the decoder outputs a single character at each time-step and our constraints are global, limiting corrections to the end of the sequence when it is too late to rectify an earlier decision. A beam search is less myopic, but in practice most of the network's output mass is peaked on the best output token, resulting in little improvement.

In this paper, we propose an inference method for neural networks that enforces output constraints without employing combinatorial discrete search. The idea is to modify some (or all) of the weights for each instance at test-time, iteratively nudging them, until the network's efficient unconstrained inference procedure produces a valid output. We achieve this by expressing the hard constraints as an optimization problem over the continuous weights and employ back-propagation to change them. Prima facie, back-propagation is doomed because the constraint loss is necessarily a function of the argmax that produced the discrete values. However, we circumvent this problem by optimizing over the energy of the violating outputs instead. 
Since the weights directly determine the output through the energy, we are able to manipulate the unconstrained inference procedure to produce the desired result. Much like scoped-learning, the algorithm customizes the weights for each example at test-time (Blei et al., 2002), but does so in a way to satisfy the constraints.

When applied to the above example, our method removes enough energy mass from the invalid output space in only twelve steps, allowing unconstrained decoding to produce a valid output sequence: sssr!ssssrr!srrr!rr!ssrrrrrr!

Interestingly, the network generates an additional s command at the beginning of the sequence while also producing a cascade of error corrections in later time steps: the new output now satisfies the constraints and is a perfectly correct parse. Of course, enforcing constraints does not always lead to an improvement in accuracy, but we find that often it does in practice, especially for a well-trained network. We find that our method is able to completely satisfy constraints in up to 81% of the outputs."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Consider a neural network that generates a variable-length output vector y = {y_i}_{i=1}^{n_y} from a variable-length input vector x = {x_i}_{i=1}^{n_x}. For example, in image classification, the input vector encodes a fixed multi-dimensional tensor of pixel intensities and the output vector comprises just a single element corresponding to the discrete class label. In sequence-to-sequence, the input might be a variable-length vector of French tokens, and the output would be a variable-length vector of its English translation. It is sometimes convenient to think of the network as a function from input to output:

f(x; W) ↦ y    (1)

However, for the purpose of exposition, we separate the neural network into a real-valued model (negative energy function) that scores the compatibility of the outputs (given the weights and input) and an inference procedure that searches for high-scoring outputs.

For the model, let y_i be a discrete output from an output unit and let ψ(y_i; x, W) be its corresponding real-valued log-space activation score (e.g., the log of the softmax for locally normalized models or simply a linear activation value for globally normalized models). Define the negative energy Ψ over a collection of output values y as an exponentiated sum of log-space activation scores:

Ψ(y; x, W) = exp(Σ_i ψ(y_i; x, W))    (2)

Then, inference is the problem of finding the values of the outputs y that maximize the negative energy given fixed inputs x and weights W. Thus, we can rewrite the neural network as the function:

f(x; W) ↦ argmax_y Ψ(y; x, W)    (3)

The purpose of separating the model from the inference procedure is so we can later formalize our optimization problem. We emphasize that this formulation is consistent with existing neural networks. Indeed, inference in feed-forward networks is a single feed-forward pass from inputs to outputs. When the outputs only depend on each other through hidden states that only depend on earlier layers of the network, feed-forward inference is exact in the sense that it finds the optimum of Equation (3). For recurrent neural networks (RNNs), each output depends on hidden states that are functions of previous output values. However, we can still think of the usual procedure that produces the highest scoring output at each time step as a local greedy approximation to global inference; of course, the procedure can optionally be improved with a beam.

A major advantage of neural networks is that once trained, inference is extremely efficient. However, constraints can render inference intractable due to discrete search. Our goal is to take advantage of the fact that unconstrained inference is inexpensive and design a constrained inference algorithm that exploits such a procedure as a black box. 
Our method iteratively adjusts the weights for each test-time input, concentrating the probability mass on the feasible region so that unconstrained inference becomes increasingly likely to generate an output that satisfies the constraints.

Consider the following constrained inference problem for neural networks:

max_y Ψ(x, y, W)  subject to  y ∈ L_x    (4)

In this work, we focus on constraints that require the outputs to belong to an input-dependent context-free language L_x (CFL). The idea is to treat the output space of the neural network as the terminal symbols, and devise the appropriate production rules and non-terminals to express constraints on them. An advantage of employing CFLs over other formalisms such as first-order logic (FOL) is that CFLs are intuitive for expressing constraints on the outputs, especially for language models and sequence-to-sequence networks. For example, when modeling Python or Java code, it is easy to express many of the desired programming language's constraints using a CFL, but cumbersome in FOL. Indeed, CFLs are an expressive class of languages.

To motivate our algorithm, we begin with the ideal optimization problem and argue that, unlike for linear models with local constraints, the resulting Lagrangian is not well suited for globally constrained inference in neural networks. We ultimately settle on an alternative objective function that reasonably models our constrained inference problem. Although our algorithm lacks the theoretical guarantees enjoyed by classic relaxation algorithms, we nevertheless find it works well in practice.

Naively enforcing the constraint requires combinatorial discrete search, which is intractable in general. Instead, we prefer a smooth optimization problem with meaningful gradients to guide the search. With this in mind, let g(y, L) ↦ r, for r ∈ R, be a function that measures a loss between a sentence y and a grammar L such that g(y, L) = 0 if and only if there are no grammatical errors in y. That is, g(y, L) = 0 for the feasible region and is strictly positive everywhere else. For a large class of CFLs, g could be the least-errors count function (Lyon, 1974), or a weighted version thereof. We could then express CFL membership as an equality constraint and minimize the Lagrangian:

min_λ max_y Ψ(x, y, W) + λ g(y, L)    (5)

However, this dual optimization problem has a major flaw. Our constraints are global and do not necessarily factorize over the individual outputs. Consequently, there is just a single dual variable λ. Optimizing it does little more than eliminate a single contour of output configurations at a time, resulting in a brute-force trial-and-error search.
Instead, observe that the network's weights control the negative energy of the output configurations. By properly adjusting the weights, we can affect the outcome of inference by removing mass from invalid outputs. The weights are likely to generalize much better than the single dual variable because in most neural networks, the weights are tied across space (e.g., CNNs) or time (e.g., RNNs). As a result, lowering the negative energy for a single invalid output has the effect of lowering the negative energy for an entire family of invalid outputs, enabling faster search. With this in mind, we introduce an independent copy W_λ of the network's weights W and minimize with respect to these "dual weights" instead of the dual variable. This is powerful because we have effectively introduced an exponential number of "dual variables" (via the energy, which scores each output) that we can easily control via the weights; although similar, the new optimization is no longer equivalent to the original:

min_{W_λ} max_y Ψ(x, y, W) + Ψ(x, y, W_λ) g(y, L)    (6)

While a step in the right direction, the objective still requires combinatorial search because (1) the maximization involves two non-linear neural networks and (2) a greedy decoding algorithm is unable to cope with the global loss g() because the constraints do not factorize over the individual outputs. In contrast, the functions involved in classic Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models to form a single unified decoding problem for which efficient inference exists (Koo et al., 2010; Rush & Collins, 2012). Since our non-linear functions and global constraints do not afford us the same ability, we must modify the optimization problem a final time so that we can employ the network's efficient inference procedure as a black box. In particular, we (1) remove the negative-energy term that involves the original weights W and compensate with a regularizer that attempts to keep the dual weights W_λ as close to these weights as possible, and (2) maximize exclusively over the network parameterized by W_λ. The result is a different optimization problem, on which our algorithm is based:

min_{W_λ} Ψ(x, y, W_λ) g(y, L_x) + α||W − W_λ||2   where   y = argmax_y Ψ(x, y, W_λ)    (7)

Informally, our algorithm alternates the maximization (by running efficient unconstrained inference) and minimization (by performing SGD) until it produces a feasible output or it exceeds a maximum number of iterations. For each test example, we re-initialize the dual weights to the trained weights to ensure the network does not deviate too far from the trained network. More precisely, see Algorithm 1.

Algorithm 1 Constrained inference for neural nets
Inputs: test instance x, input-specific CFL L_x, pretrained weights W
W_λ ← W  #reset instance-specific weights
while not converged do
    y ← f(x; W_λ)  #perform inference using weights W_λ
    ∇ ← ∇_{W_λ} ( Ψ(x, y, W_λ) g(y, L_x) + α||W − W_λ||2 )  #compute constraint loss gradient
    W_λ ← W_λ − η∇  #update instance-specific weights with SGD or a variant thereof
end while
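The following is a minimal PyTorch-style sketch of Algorithm 1. The decode and energy methods stand in for the network's unconstrained inference and negative energy, and g is assumed to close over the input-specific language L_x; these names are our own illustrative assumptions, not the paper's code:

```python
import copy
import torch

def constrained_inference(model, x, g, alpha=0.0, lr=0.05, max_iters=100):
    """Algorithm 1: nudge instance-specific 'dual' weights until the
    unconstrained decoder emits an output with zero constraint loss g."""
    dual = copy.deepcopy(model)            # W_lambda <- W
    opt = torch.optim.SGD(dual.parameters(), lr=lr)
    orig = [p.detach().clone() for p in model.parameters()]
    for _ in range(max_iters):
        y = dual.decode(x)                 # unconstrained inference (black box)
        violation = g(y)                   # constraint loss; 0 iff y is feasible
        if violation == 0:
            return y
        loss = dual.energy(x, y) * violation
        # squared-distance stand-in for the ||W - W_lambda|| regularizer
        loss = loss + alpha * sum(((p - q) ** 2).sum()
                                  for p, q in zip(dual.parameters(), orig))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dual.decode(x)                  # may still violate the constraints
```

As the experiments note, initializing W_λ = W already keeps the dual weights close to the trained network, so alpha can often be set to zero.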
Consider the structured prediction problem of syntactic parsing, in which the goal is to input a sentence comprising a sequence of tokens and output a tree describing the grammatical parse of the sentence. One way to model the problem with neural networks is to linearize the representation of the parse tree and then employ the familiar sequence-to-sequence model (Vinyals et al., 2015a).

Let us suppose we linearize the tree using a sequence of shift (s) and reduce (r, r!) commands that control an implicit shift-reduce parser. Intuitively, these commands describe the exact instructions for converting the input sentence into a complete parse tree: the interpretation of the symbol s is that we shift an input token onto the stack, and the interpretation of the symbol r is that we start (or continue) reducing (popping) the top elements of the stack; the interpretation of a third symbol ! is that we stop reducing and push the reduced result back onto the stack. Thus, given an input sentence and an output sequence of shift-reduce commands, we can deterministically recover the tree by simulating a shift-reduce parser. For example, the sequence ssrr!ssr!rr!rr! encodes a type-free version of the parse tree (S (NP the ball) (VP is (NP red))) for the input sentence "the ball is red". It is easy to recover the tree structure from the input sentence and the output commands by simulating a shift-reduce parser, performing one command at a time as prescribed by the classic algorithm.

Note that for output sequences to form a valid tree over the input, the sequence must satisfy a number of constraints. 
First, the number of shifts must equal the number of input tokens m_x; otherwise, either the tree would not cover the entire input sentence or the tree would contain spurious terminal symbols. Second, the parser cannot issue a reduce command if there are no items left on the stack. Third, the number of reduces must be sufficient to leave just a single item, the root node, on the stack.

We can express most of these constraints with a CFL:

G → sRr!    (Rule 1)
R → sRr     (Rule 2)
R → Rr!     (Rule 3)
R → RR      (Rule 4)
R → ε       (Rule 5)

Intuitively, Rule 1 states that a valid shift-reduce command set must begin with a shift (since the stack is initially empty, there is nothing to reduce) and end with a reduce that places the final result on the stack. Rule 2 states that if we do a shift, then we need to reduce the shifted token at some point in the future. Rule 3 states that if we do not shift, then we are allowed to reduce only if we also push the result on the stack. Rule 4 allows for multiple subtrees. Rule 5 is the base case.

Note, however, that this grammar is for a general-purpose shift-reduce language, but we need to constrain the number of shifts to equal the number of input tokens m_x. Since this cardinality constraint is a bit verbose to express with production rules, we can instead write the regular language (s(r!)*)^{m_x}(r!)*, where m_x is the number of elements in x, and intersect it with our CFL:

L_x = L ∩ (s(r!)*)^{m_x}(r!)*    (9)

Rather than relying on a general-purpose algorithm to compute g(y, L_x) that measures the number of grammatical errors, we instead implement it specifically for our language. Let ct_{i=1}^{n}(b(i)) be the function that counts the number of times proposition b(i) is true. Now, define the following loss:

g(y, L_x) = (m_x − ct_{i=1}^{n}(y_i = s))^2 + max(0, max_{1≤j≤n}(ct_{i=1}^{j}(y_i = r) − ct_{i=1}^{j}(y_i ∈ {s, !}))) + (ct_{i=1}^{n}(y_i = r) − ct_{i=1}^{n}(y_i ∈ {s, !}) + 1)^2    (10)

The first term measures the amount of violation due to the regular language, and the second and third terms measure the amount of violation according to the CFL.
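A minimal executable sketch of a loss in this spirit, as our own transcription of the three counting terms above (not necessarily the paper's exact implementation of Equation 10):

```python
def shift_reduce_loss(y, m_x):
    """Counting-based violation for a shift-reduce command string y over m_x tokens.
    Zero iff the counts are consistent with a single tree covering all tokens."""
    shifts = sum(c == "s" for c in y)
    reduces = sum(c == "r" for c in y)
    pushes = sum(c in ("s", "!") for c in y)
    # worst prefix imbalance: popping more items than have ever been pushed
    worst, r_count, p_count = 0, 0, 0
    for c in y:
        r_count += c == "r"
        p_count += c in ("s", "!")
        worst = max(worst, r_count - p_count)
    # final stack must hold exactly one item: pushes - pops == 1
    return (m_x - shifts) ** 2 + worst + (reduces - pushes + 1) ** 2

assert shift_reduce_loss("sssr!ssssrr!srrr!rr!ssrrrrrr!", 10) == 0  # valid parse
assert shift_reduce_loss("ssr!sr!ssssrrr!rr!ssrrrrrr!", 10) > 0     # nine shifts
```

The two asserts replay the example from the introduction: the invalid decoding is penalized for its missing shift, while the corrected sequence incurs zero loss.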
"}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "There has been recent work in applying neural networks to structured prediction problems. For example, the recent structured prediction energy networks (SPENs) combine graphical models and neural networks via an energy function defined over the output variables (Belanger & McCallum, 2016). SPENs focus on soft constraints (via the energy function) and perform inference by relaxing the binary output variables to be continuous and then backpropagating into them. In contrast, our method focuses on hard constraints and we backpropagate into the weights rather than into the outputs directly. We could combine our method with SPENs to handle soft constraints; for example, by back-propagating the output energy into the weights instead of the relaxed outputs themselves.

There has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints, for example by employing a sequence-to-sequence network (Vinyals et al., 2015a) or a custom network designed for shift-reduce parsing (Dyer et al., 2016). The former requires the output to form a valid parse tree and hence employs post-processing to ensure this property. The latter satisfies constraints as part of the decoding process by sampling over a combinatorial space. Our approach does not rely on post-processing or discrete search.

Another intriguing approach is to distill the hard constraints into the weights at training time using a teacher network (Hu et al., 2016). The method is appealing because it does not require constrained inference or combinatorial search. However, the method must achieve a difficult balance between the loss due to the training data and the loss due to the constraint violations. Further, it would crucially rely on the network's ability to generalize the constraints learned on the training data to the testing data.

Finally, our method highly resembles dual decomposition and more generally Lagrangian relaxation for structured prediction (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (indeed this assumption parallels our exploitation of the fact that unconstrained inference in the neural network is efficient). Then, the method employs gradient descent to gradually concentrate this superset onto the feasible region until the constraints are satisfied. However, for computational reasons, these techniques assume that the constraints factorize over the output and that the functions are linear so that they can be combined into a single model. In contrast, we have a single dual variable, so we instead minimize with respect to the weights, which generalize better over the output. Further, we are unable to combine the dual into a single model over which we can do inference because the network is highly non-linear.

In this section we empirically evaluate our constrained inference procedure on two sequence-to-sequence tasks. The first is a transduction task between two simple languages, which we describe next. The second is the sequence-to-sequence shift-reduce parsing task described in Section 4.

task | inference | weights changed (W_λ) | conversion rate | accuracy
azbz | unconstrained | none | 0.0% | 75.6%
azbz | constrained | all | 65.2% | 82.4%
azbz | constrained | output only | 20.9% | 71.8%
azbz | constrained | encoder only | 58.2% | 82.5%
azbz | constrained | decoder only | 57.4% | 82.3%
sr no types | unconstrained | none | 0.0% | 84.0%
sr no types | constrained | all | 81.8% | 84.4%
sr with types | unconstrained | none | 0.0% | 87.8%
sr with types | constrained | all | 79.2% | 88.3%
sr with types | constrained | output only | 5.0% | 88.1%
sr with types | constrained | decoder (top layer) | 36.2% | 88.2%
sr with types | constrained | decoder (all layers) | 54.7% | 88.3%
sr with types | constrained | decoder (top) + attention | 38.0% | 88.1%
sr with types | constrained | decoder (all) + attention | 56.5% | 88.2%

Table 1: Conversion rates on all three tasks with 100 steps of SGD. Note that satisfying the constraints has no negative effect on accuracy and often has a positive effect.

bzazbzazbzazazbzbzbzbzbz → zbaaazbaaazbaaaaaazbzbzbzbzb
iteration | output | loss | accuracy
0 | zbaaazbaaazbaaaaaazbzbzbaaazbzb | 0.260 | 75.0
39 | zbaaazbaaazbaaaaaazbzbzbaaazbzb | 0.259 | 75.0
40 | zbaaazbaaazbaaaaaazbzbzbaaazb | 0.250 | 80.0
72 | zbaaazbaaazbaaaaaazbzbzbaaazb | 0.249 | 80.0
73 | zbaaazbaaazbaaaaaazbzbzbzbzb | 0.0 | 100.0

Table 2: An example for which enforcing the constraints improves accuracy. Red indicates errors. The output changes more than once before the constraints are finally enforced. Greedy decoding with constraints might correct this example because the spurious a's are at the end of the sequence.

azazbzazbzbzazbzbzbzbzbz → aaaaaazbaaazbzbaaazbzbzbzbzb
iteration | output | loss | accuracy
0 | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2472 | 66.7
1 | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2467 | 66.7
2 | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2462 | 66.7
3 | aaaaaazbaaazbzbaaazbzbzbzbzb | 0.0 | 100.0

Table 3: An example for which enforcing the constraints improves accuracy. Red indicates errors. Note that greedy decoding with constraints would not fix the errors in the middle since errors are made before constraints are violated. In contrast, the proposed method takes the constraints into account in a global manner, allowing earlier errors to be corrected by future constraint violations.

bzbzbzbzazbzbzazazazazbz → zbzbzbzbaaazbzbaaaaaaaaaaaazb
iteration | output | loss | accuracy
0 | zbzbzbzbaaazbaaaaaaaaaaaazbaaa | 0.2954 | 74.2
4 | zbzbzbzbzbaaaaaaaaazbzbaaaaaa | 0.0 | 60.0

Table 4: An example for which enforcing the constraints degrades accuracy. Errors in red.

A transducer T : L1 → L2 is a function from a source language to a target language. For the purpose of the experiments, T is known and our goal is to learn it from data. We choose a transducer similar to those studied in recent work (Grefenstette et al., 2015). The source language L1 is (az|bz)* and the target language L2 is (aaa|zb)*. The transducer is defined to map az to aaa and bz to zb. For example, T(bzazbz) ↦ zbaaazb. The training set comprises 1934 sequences of length 2-20 and the test set contains sentences of lengths 21-24. As is common practice, we employ shorter sentences for training to require generalization to longer sentences at test time.

We employ a thirty-two hidden unit, single-layered, attentionless, sequence-to-sequence long short-term memory (LSTM) in which the decoder LSTM inputs the final encoder state at each time-step. The encoder and decoder LSTMs each have their own set of weights. We train the network for 1000 epochs using RMSProp to maximize the likelihood of the output (decoder) sequences in the training set. The network achieves perfect train accuracy while learning the rules of the output grammar nearly perfectly, even on the test-set. 
However, despite learning the train-set perfectly, the network fails to learn the input-specific constraint that the number of a's in the output should be three times the number in the input. We implement a loss for this constraint and evaluate how well our method enforces it at test-time:

g(y, L_x) = (n + m)^{-1} (3 Σ_i 1(x_i = a) − Σ_i 1(y_i = a))^2

Here n + m, the combined input/output length, normalizes between 0 and 1. For constrained inference we run Algorithm 1 and employ vanilla stochastic gradient descent with a learning rate of 0.05 and no weight decay. We cap the number of iterations at a maximum of 100.

The top section of Table 1 contains the results for this azbz task. We use the term converted to refer to a sentence that initially had a constraint violation, but was later fixed by the constrained-inference procedure. The conversion rate is the percentage of such sentences that we convert: on this task, up to two-thirds. We experiment with which subset of the weights is best for satisfying the constraints, finding that it is best to modify them all. We also report accuracy to study an initial concern. Specifically, we had to omit the negative energy of the original weights W from our optimization problem, Equation (7), potentially allowing the network to find a set of dual weights W_λ that happen to satisfy the constraints, but that have poor performance. However, we found this not to be the case. In fact, we report the token-wise accuracy over the examples for which the unconstrained neural network violated constraints and find that, on the contrary, accuracy improves. Further, we find the regularizer is unnecessary since the initialization W_λ = W ensures the network never drifts too far.

In order to gain a better understanding of the algorithm's behavior, we provide data-cases that highlight both success and failure (Tables 2, 3, and 4). The title of these tables is the input and the desired ground-truth output. The rows of the table show the network's output at each iteration (as indicated). The loss column is the constraint loss weighted by the output's energy, Ψ(x, y, W_λ) g(y, L_x), and the final column is the token-wise accuracy between the output and the ground truth.

(" So it 's a very mixed bag . ") → sssr!ssssrr!srrr!rr!ssrrrrrr!
iteration | output | loss | accuracy
0 | ssr!sr!ssssrrr!rr!ssrrrrrr! | 0.0857 | 33.3%
11 | ssr!sr!ssssrrr!rr!ssrrrrrr! | 0.0855 | 33.3%
12 | sssr!ssssrr!srrr!rr!ssrrrrrr! | 0.0000 | 100.0%

Table 5: A shift-reduce example for which the method successfully enforces constraints. The initial output has only nine shifts, but there are ten tokens in the input. Enforcing the constraint not only corrects the number of shifts to ten, but changes the implied tree structure to the correct tree.

Table 2 contains an example for which our method successfully satisfies the constraints, resulting in perfect accuracy. However, because the constraint violation appears at the end of the string, a greedy decoder that opportunistically enforces constraints on the fly could potentially correct this error. In Table 3 we show a more interesting example for which such a greedy decoder would not be as successful. In particular, the unconstrained network outputs the final aaa too early in the sequence, but the constraint that controls the number of a's in the output is not violated until the end of the sequence. 
In contrast, our method takes the constraint into account globally, allowing the network to not only rectify the constraint, but to achieve perfect accuracy on the sentence (in just four gradient updates). Finally, in Table 4 we show an example for which enforcing the constraints hurts the accuracy. The updates cause the network to erroneously change outputs that were actually correct. This can happen if (a) the underlying network is sometimes inaccurate in its output or confidence/probabilities thereon or (b) the gradient steps are too large, causing the network to completely leapfrog over the correct solution in a single step. The latter can be avoided by normalizing the constraint loss so it does not grow unbounded with the number of outputs and by erring on the side of a smaller learning rate.

We repeat the same experiment (middle section of Table 1), but on the shift-reduce parsing task described in Section 4. We convert the Wall Street Journal portion of the Penn Tree Bank (PTB) into shift-reduce commands and randomly split into 30k train and 9.2k test examples. We increase the number of hidden units to sixty-four to accommodate the larger input space (50k words) and employ Equation 10 (normalized by sequence length) for the constraint loss. We measure the sequence-aligned token accuracy. Otherwise, we employ the exact same experimental parameters as the azbz task, both for training the LSTM and for our algorithm. We find that our algorithm performs even better on the real-world task, converting over 80% of the violated outputs. We again find that our procedure has no negative impact on accuracy, which in fact improves, but not as substantially as for the azbz task. Table 5 contains a successful example that we had previously highlighted in Section 1. The algorithm satisfies the constraints, and also corrects the remaining output errors."}, {"section_index": "5", "section_name": "7 CONCLUSION", "section_text": "We presented an algorithm for satisfying constraints in neural networks that avoids combinatorial search, but employs the network's efficient unconstrained procedure as a black box. We evaluated the algorithm on two sequence-to-sequence tasks, a toy transducer problem and a real-world shift-reduce parsing problem. We found that the method was able to completely rectify up to 80% of violated outputs when capping the number of iterations at 100. Often, enforcing constraints causes the accuracy to improve, dispelling initial concerns that adjusting the weights at test-time would be treacherous. Our method currently lacks the same theoretical guarantees as classic Lagrangian relaxation methods, so in future work we want to focus on supplemental theory and additional objective functions. We also hope to extend the work to handle soft constraints, for example, as imposed by an external language model.

Finally, we conduct a version of the shift-reduce experiment that includes the phrase types (e.g., noun-phrase (NP)). To accommodate the larger output space (output alphabet size increases to 479), we employ a larger network with 128 hidden units, attention and three layers. Note that even this more sophisticated network fails to learn the constraints from data and adding layers does not help. The larger network affords us the opportunity to experiment with modifying different subsets of weights for enforcing constraints.
As seen in the last section of Table 1, modifying all the weights works best, converting 79.2% of the violating sentences; again without negatively affecting accuracy."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "David M. Blei, Andrew Bagnell, and Andrew K. McCallum. Learning with scope, with application to information extraction and classification. In Uncertainty in Artificial Intelligence (UAI), 2002.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Neural Information Processing Systems (NIPS), 2015.

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric P. Xing. Harnessing deep neural networks with logical rules. In Association for Computational Linguistics (ACL), 2016.

Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 1288-1298. Association for Computational Linguistics, 2010.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pp. 1378-1387, 2016.

Gordon Lyon. Syntax-directed least-errors analysis for context-free languages: A practical approach. Programming Languages, 17(1), January 1974.

Andrew McCallum and Ben Wellner. Conditional models of identity uncertainty with applications to noun coreference. In Neural Information Processing Systems (NIPS), 2005.

Ryan McDonald and Fernando Pereira. Learning of approximate dependency parsing algorithms. In EACL, 2006.

Alexander M. Rush and Michael Collins. A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research, 45:305-362, 2012.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS), 2014.

Michael Wick, Aron Culotta, and Andrew McCallum. Learning field compatibilities to extract database records from unstructured text. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06, pp. 603-611, Stroudsburg, PA, USA, 2006. Association for Computational Linguistics. ISBN 1-932432-73-6."}]
BJ46w6Ule
[{"section_index": "0", "section_name": "DYNAMIC PARTITION MODELS", "section_text": "Marc Goessling"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We consider the task of learning a compact binary representation (e.g., Goessling & Amit, 2015). That means we are seeking a parsimonious set of experts, which can explain a given collection of multivariate data points. In contrast to most existing approaches the emphasis here is on finding experts that are individually meaningful and that have disjoint responsibilities. Ideally, each expert explains only one factor of variation in the data and for each factor of variation there is exactly one expert that focuses on it.

We start by describing a simple model family, which forms the basis of our work. A partition model (Hartigan, 1990) makes use of a manually specified partitioning of the D variables into subsets. The model is completed by specifying a prior distribution P(h) for the latent state h. One advantage of partition models is that estimating the block models from observations is straightforward, while learning expert models in general requires computationally involved procedures (Bengio et al., 2013). However, in order to be able to define a satisfactory partitioning of the variables some prior knowledge about the dependence structure is needed. For image data a common choice is to use a regular grid that divides the image into patches (e.g., Pal et al., 2002). In general, a good partitioning is characterized by providing weakly dependent subsets of variables so that the conditional independence assumption (1) is reasonable and the distribution of the latent variables is easy to model. Unfortunately, often there simply is no single fixed partitioning that works well for the whole dataset because the sets of variables, which are affected by different factors of variation, might overlap. This restricts the scenarios in which partition models are useful."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present a new approach for learning compact and intuitive distributed representations with binary encoding. Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which experts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.

Formally, the experts P_k, k = 1, ..., K, are probability distributions that depend on binary latent variables h(k). The latent state h specifies which experts are active and has to be inferred for each D-dimensional data point x. The active experts then define a probability distribution P. The goal of representation learning is to train experts such that the conditional likelihood P(x | h) of the data given the latent activations is maximized.

For each subset of variables x(S_ℓ) = (x(d))_{d ∈ S_ℓ}, there exists a separate model P_ℓ. It is then typically assumed that variables in different subsets are conditionally independent, i.e.,

P(x | h) = ∏_ℓ P_ℓ( x(S_ℓ) | h(ℓ) ).    (1)
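As a concrete illustration of equation (1), the following minimal NumPy sketch fits and scores a fixed partition model with Bernoulli experts. It is our own simplified example (the latent state is suppressed for brevity, so each block carries a single template rather than a mixture), not the paper's implementation.

```python
# Sketch of a fixed partition model with one Bernoulli template per block.
import numpy as np

def fit_partition_model(X, subsets):
    """X: (N, D) binary data; subsets: index arrays partitioning {0, ..., D-1}.
    Returns one Bernoulli template per subset (the empirical means)."""
    return [X[:, S].mean(axis=0) for S in subsets]

def log_likelihood(x, subsets, templates, eps=1e-9):
    """Log-likelihood of one data point under the factorization of equation (1)."""
    ll = 0.0
    for S, mu in zip(subsets, templates):
        p = np.clip(mu, eps, 1 - eps)          # avoid log(0)
        ll += np.sum(x[S] * np.log(p) + (1 - x[S]) * np.log(1 - p))
    return ll
```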
In this paper we extend partition models to allow for dynamically adapting partitionings. In Section 2 we introduce the model and present an appropriate learning procedure. Related work is discussed in Section 3. Special emphasis is given to the comparison with products of experts (Hinton, 2002). Experiments on binary and real-valued data are performed in Section 4. While it is important to explain high-dimensional data points through multiple experts, our work shows that it is possible to assign the responsibility for individual variables to a single expert (rather than having all active experts speak for every variable).

Our main proposal is to define for each expert P_k its level of expertise e_k ∈ R^D for all variables. We can then dynamically partition the variables based on the active experts. Specifically, for each variable we employ the most reliable (active) expert:

P(x | h) = ∏_{d=1}^{D} P_{k*(d)}( x(d) ),    k*(d) = argmax_{k : h(k)=1} e_k(d).    (2)

That means, each variable x(d) is explained by only a single expert k*(d). The partitioning into expert supports S_k(h) = {d ∈ {1, ..., D} : k*(d) = k} is determined dynamically based on the latent configuration h. We hence call our model a dynamic partition model."}, {"section_index": "3", "section_name": "2.1 INFERENCE", "section_text": "In the inference step we try to find for each data point x_n the subset of experts {k : h_n(k) = 1} that maximizes P(x_n | h_n). To do this, we suggest to sequentially activate the expert that most improves the likelihood, until the likelihood cannot be improved anymore. This approach is called likelihood matching pursuit (Goessling & Amit, 2015). The greedy search works well for our model because we are working with a small set of experts and each expert focuses on a rather different structure in the data. Consequently, the posterior distribution on the latent variables given x_n is often highly peaked at a state h_n (note that for high-dimensional data the effect of the prior P(h) is typically negligible)."}, {"section_index": "4", "section_name": "2.2 LEARNING", "section_text": "In contrast to traditional approaches, which combine multiple experts for individual variables, training the experts in a dynamic partition model is trivial. Indeed, the maximum-likelihood estimates are simply the empirical averages over all observations for which the expert was responsible. For example, the expert means can be estimated from training data x_n, n = 1, ..., N, as

μ_k(d) = ( Σ_{n : k*_n(d)=k} x_n(d) ) / ( Σ_{n=1}^{N} 1{ k*_n(d) = k } ).    (3)

Here, k*_n(d) denotes the expert with the highest level of expertise e_k(d) among all experts k with h_n(k) = 1."}, {"section_index": "5", "section_name": "2.2.1 EXPERTISE-WEIGHTED COMPOSITION", "section_text": "In order to compute the estimator in (3) the levels of expertise e_k have to be known. Since in this paper we are trying to train the experts as well as the associated levels of expertise we consider a smoothing of the maximum-expertise composition (2) to motivate our learning procedure. Rather than using the expert with the highest level of expertise, we form a mixture of the active experts, where the mixture weight is proportional to the level of expertise. Thus, the smoothed composition is

P(x | h) = ∏_{d=1}^{D} Σ_{k=1}^{K} r_k(d) P_k( x(d) ),    r_k(d) = e_k(d) / Σ_{k' : h(k')=1} e_{k'}(d) if h(k) = 1, and r_k(d) = 0 if h(k) = 0.    (4)

In contrast to classical mixture models (e.g., McLachlan & Peel, 2004) we use different mixture weights for each dimension d ∈ {1, ..., D}. The mixture weight r_k(d) is the degree of responsibility of the k-th expert for the d-th dimension and depends on the latent state h.
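Both composition rules are short to state in code. The following NumPy sketch, under our own (assumed) notation with opinions mu and expertise e as (K, D) arrays and a binary activation vector h, illustrates the hard rule of equation (2) and its smoothed relaxation of equation (4); it assumes every dimension has at least one active expert.

```python
# Sketch of the maximum-expertise composition (2) and its smoothing (4).
import numpy as np

def hard_composition(mu, e, h):
    """Each dimension is claimed by the active expert with highest expertise."""
    e_active = np.where(h[:, None] == 1, e, -np.inf)    # mask inactive experts
    k_star = np.argmax(e_active, axis=0)                # (D,) winning expert per dim
    return mu[k_star, np.arange(mu.shape[1])]           # composed template

def smoothed_composition(mu, e, h):
    """Per-dimension mixture with weights r_k(d) proportional to expertise."""
    e_active = e * h[:, None]                           # zero out inactive experts
    r = e_active / e_active.sum(axis=0, keepdims=True)  # responsibilities r_k(d)
    return (r * mu).sum(axis=0)                         # composed probabilities
```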
An expert with a medium level of expertise assumes full responsibility if no other reliable expert is present and takes on a low degree of responsibility if experts with a higher level of expertise are present. By the law of total variance,

V[P] = E_r[ V[P_k] ] + V_r[ E[P_k] ],

the variance of a mixture is always larger than the smallest variance of its components. In other words, the precision of the smoothed model is maximized when all the mixture weight (individually for each dimension) is concentrated on the most precise expert. We can thus learn a dynamic partition model in an EM manner (Dempster et al., 1977) by interleaving inference steps with updates of the experts and levels of expertise in the smoothed model."}, {"section_index": "6", "section_name": "2.2.2 EXPERT UPDATE", "section_text": "The sequential inference procedure (from Section 2.1) provides for each data point x_n the latent representation h_n. We denote the corresponding expert responsibilities (using the current estimates for the levels of expertise) by r_nk. The smooth analog to the hard update equation (3) is a responsibility-weighted average of the training samples

μ_k(d) = ( Σ_{n=1}^{N} r_nk(d) x_n(d) + ε μ_0(d) ) / ( Σ_{n=1}^{N} r_nk(d) + ε ).    (5)

For stability we added a term that shrinks the updated templates towards some target μ_0 if the total responsibility of the expert is small. In our experiments we set μ_0 to the average of all training examples. The update rule implies that the experts have local supports, in the sense that they are uninformative about variables for which they are not responsible.

For binary data the mean templates μ_k are all we need. Continuous data x ∈ R^D is modeled through Gaussians and hence we also have to specify the variances v_k of the experts. We again use a responsibility-weighted average

v_k(d) = ( Σ_{n=1}^{N} r_nk(d) ( x_n(d) − μ_k(d) )² + ε v_0 ) / ( Σ_{n=1}^{N} r_nk(d) + ε ),    (6)

where v_0 is the empirical variance of all training samples."}, {"section_index": "7", "section_name": "2.2.3 EXPERTISE UPDATE", "section_text": "We now turn to the updates of the levels of expertise. The log-likelihood of the smoothed model (4) as a function of e_k is rather complex. Using gradient descent is thus problematic because the derivatives with respect to e_k can have very different scales, which makes it difficult to choose an appropriate learning rate and hence the convergence could be slow. However, exact optimization is not necessary because in the end only the order of the levels of expertise matters. Consequently, we propose to adjust e_k(d) only based on the sign of the gradient. We simply multiply or divide the current value by a constant C. If the gradient is very close to 0 we leave e_k(d) unchanged. For all our experiments we used C = 2. Larger values can speed up the convergence but sometimes lead to a worse solution. Using an exponential decay is common practice when learning levels of expertise (e.g., Herbster & Warmuth, 1998).

In the learning procedure we perform the expertise update first. We then recompute the responsibilities using these new levels of expertise and update the experts. Our algorithm typically converges after about 10 iterations."},
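Putting the two updates together, one iteration of the learning procedure can be sketched as follows. This is a condensed illustration under our own notation, not the reference implementation; grad_e stands for the gradient of the smoothed log-likelihood with respect to the expertise (e.g., assembled from the derivatives in the appendix), and every sample is assumed to have at least one active expert.

```python
# Sketch of one learning iteration: expertise update (sign rule), then
# responsibility-weighted template update (5).
import numpy as np

def learning_iteration(X, mu, e, H, grad_e, C=2.0, eps=1.0, tol=1e-8):
    """X: (N, D) data; mu, e: (K, D) opinions/expertise; H: (N, K) latent states."""
    # Expertise first: multiply or divide by C depending on the gradient sign.
    g = grad_e(X, mu, e, H)
    e = np.where(g > tol, e * C, np.where(g < -tol, e / C, e))

    # Recompute responsibilities r_nk(d) with the new expertise levels.
    E = (H[:, :, None] * e[None]).sum(axis=1, keepdims=True)   # (N, 1, D)
    R = H[:, :, None] * e[None] / E                            # (N, K, D)

    # Responsibility-weighted template update with shrinkage towards mu0.
    mu0 = X.mean(axis=0)
    num = (R * X[:, None, :]).sum(axis=0) + eps * mu0
    den = R.sum(axis=0) + eps
    return num / den, e
```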
{"section_index": "8", "section_name": "3 RELATED WORK", "section_text": "Herbster & Warmuth (1998) proposed an algorithm for tracking the best expert in a sequential prediction task. In their work it is assumed that a linear ordering of the variables is known such that the expert with the highest level of expertise is constant on certain segments. In contrast to that, our approach can be applied to an arbitrary permutation of the variables. Moreover, they consider a single sequence of variables with a fixed partitioning into expert supports. In our setup the partitioning changes dynamically depending on the observed sample. However, the greatest difference to our work is that Herbster & Warmuth (1998) do not learn the individual experts but only focus on training the levels of expertise.

Lücke & Sahani (2008) studied a composition rule that also partitions the variables into expert supports. In their model the composed template is simply the maximum of the expert templates μ_k. This rule is only useful in special cases. A generalization, in which the composition depends on the maximum and the minimum of the expert templates μ_k(d), was considered by Goessling & Amit (2015). While the motivation for that rule was similar, the maximum-expertise rule in this paper is more principled and can be applied to continuous data.

In the work by Amit & Trouvé (2007) a simple average (i.e., an equal mixture) of the individual templates was used. With such a composition rule, all experts are equally responsible for each of the variables and hence specialization on local structures is not possible. To circumvent this problem, in their work e_k(d) was manually set to 1 for some subset of the dimensions (depending on a latent shift variable) and to 0 elsewhere.

A popular model family with latent binary representation are products of experts (Hinton, 2002). In such a model the individual distributions P_k are multiplied together and renormalized. Computation of the normalizing constant is in general intractable though. A special case, in which an explicit normalization is possible, are restricted Boltzmann machines (Hinton, 2002). In these models the experts are product Bernoulli distributions with templates μ_k ∈ [0,1]^D. The composed distribution is then also a product Bernoulli distribution with composed template

μ_PoE(d) = σ( Σ_{k : h(k)=1} w_k(d) ),

where the weights w_k(d) = log( μ_k(d) / (1 − μ_k(d)) ) ∈ R are the log-odds of the experts and σ(t) = (1 + exp(−t))^{-1} is the logistic function. This sum-of-log-odds composition rule arises naturally from generalized linear models for binary data because the log-odds are the canonical parameter of the Bernoulli family. In a product of experts, the variance of the composition is usually smaller than the smallest variance of the experts. As a consequence, products of experts tend to employ many experts for each dimension (for more details on this issue see Goessling & Amit (2015)). Even with an L1-penalty on the votes w_k(d) the responsibility for individual variables x(d) is typically still shared among many experts. The reason for this is that under the constraint Σ_k w_k(d) = w(d) the quantity Σ_k |w_k(d)| is minimized whenever w_k(d) has the same sign for all k. The usual inference procedure for products of experts independently activates experts based on their inner product with the data point. In particular, not just the most probable expert configuration is determined but the whole posterior distribution on latent states given the data is explored through Monte Carlo methods. For learning in products of experts, simple update rules like (5) and (6) cannot be used because for each expert the effects of all other experts have to be factored out. Dynamic partition models essentially decompose the expert votes w_k into expert opinions μ_k and levels of expertise e_k.
Apart from the computational advantages for learning, this introduces an additional degree of flexibility because the expert supports are adjusted depending on which other experts are present (cf. Figure 5). Moreover, the decomposition into opinions and levels of expertise avoids ambiguities. For example, a vote w_k(d) ≈ 0 could mean that μ_k(d) ≈ 1/2 or that e_k(d) ≈ 0.

Another common model for representation learning are autoencoders (Vincent et al., 2008), which can be considered as mean-field approximations of restricted Boltzmann machines that use latent variables h(k) with values in [0,1]. To obtain a sparse representation a penalty on the number of active experts can be added (Ng, 2011). Such approaches are also known as sparse dictionaries (e.g., Elad, 2010) and are based on opinion pools of the form Σ_k h(k) w_k(d). The strength of the sparsity penalty is an additional parameter which has to be tuned. In dynamic partition models sparse activations are inherent. In the next section, we experimentally compare products of experts, autoencoders and sparse dictionaries to our proposed model.

Figure 1: Expert training for the synthetic dataset. Each panel shows the probabilities (white/black corresponds to μ_k(d) = 0/1) of the 10 experts (rows) for the 10 dimensions (columns). 1st panel: Random initialization. 2nd-4th panel: Our learning procedure after 3/5/15 iterations.

Figure 2: Trained experts for the synthetic data after 1,000 iterations using an autoencoder (1st panel), a sparse dictionary (2nd panel) and a restricted Boltzmann machine (3rd panel)."}, {"section_index": "9", "section_name": "4.1 SYNTHETIC DATA", "section_text": "We consider a synthetic example and try to learn the underlying factors of variation. The dataset consists of the 32-element subset {(0,1), (1,0)}^5 ⊂ {0,1}^{10}. Note that there are 5 factors of variation corresponding to the state of the pairs (x(2ℓ−1), x(2ℓ)) for ℓ = 1, ..., 5 with the two factor levels (0,1) and (1,0). Indeed, the distribution can be easily expressed through a partition model with partitioning

{1,2} ∪ {3,4} ∪ {5,6} ∪ {7,8} ∪ {9,10}"}, {"section_index": "10", "section_name": "and corresponding models", "section_text": "P_ℓ( x(2ℓ−1), x(2ℓ) ) = ½ · 1{ x(2ℓ−1)=0, x(2ℓ)=1 } + ½ · 1{ x(2ℓ−1)=1, x(2ℓ)=0 }.

We show that our dynamic partition model is able to learn these factors of variation without requiring a manual specification of the partitioning. Here, the total number of experts we need to accurately reconstruct all data points happens to be equal to the number of dimensions. However, in other cases the number of required experts could be smaller or larger than D. We ran our learning algorithm for 15 iterations starting from a random initialization of the experts. The resulting templates after 3, 5 and 15 iterations are shown in Figure 1. We see that each of the final experts specializes in exactly two dimensions d and d+1. Its opinions for these variables are close to 0 and 1, respectively, while the opinions for the remaining variables are about 1/2. Every data point can now be (almost) perfectly reconstructed by using exactly 5 of these experts.

For comparison we trained various other models with 10 experts, which use a sum-of-log-odds composition.
We first tried an autoencoder (Vincent et al., 2008), which in principle could adopt the identity map because it uses (in contrast to our model) a bias term for the observable and latent variables. However, the gradient descent learning algorithm with tuned step size yielded a different representation (Figure 2, 1st panel). While the reconstruction errors are rather low, they are clearly nonzero and the factors of variations have not been disentangled. Next, we considered a dictionary with a sparse representation (e.g., Elad, 2010). The sparsity penalty was adjusted so that the average number of active dictionary elements was around 5. The learning algorithm again yielded highly dependent experts (Figure 2, 2nd panel). Finally, we trained a restricted Boltzmann machine through batch persistent contrastive divergence (Tieleman, 2008) using a tuned learning rate. Note that a restricted Boltzmann machine in principle only requires 5 experts to model the data appropriately because it uses bias terms. However, we again learned 10 experts (Figure 2, 3rd panel). While the results look better than for the previous two models they are still far from optimal. In earlier work (Goessling & Amit, 2015) we performed a quantitative comparison for a similar dataset, which showed that the reconstruction performance of models with sum-of-log-odds composition is indeed suboptimal.

Figure 3: Trained experts for MNIST digits. Left: Expert probabilities (white/black corresponds to μ_k(d) = 0/1). Right: Levels of expertise (blue/red corresponds to small/large values).

Figure 4: Reconstruction of MNIST test examples using likelihood matching pursuit. Each column visualizes the composed Bernoulli templates during the sequential inference procedure (top down) for one sample. The bottom row are the original data points.

Figure 5: Dynamic supports for 5 MNIST experts. Left column: Expert probabilities. Remaining columns: Composed Bernoulli templates for 10 latent configurations. The cast opinion of the expert is shown in shades of red (white/red corresponds to μ_k(d) = 0/1).

Figure 6: Trained experts for Weizmann horses. Left: Expert probabilities (white/black corresponds to μ_k(d) = 0/1). Right: Levels of expertise (blue/red corresponds to small/large values)."}, {"section_index": "11", "section_name": "4.2 MNIST DIGITS", "section_text": "We now consider the MNIST digits dataset (LeCun et al., 1998), which consists of 60,000 training samples and 10,000 test samples of dimension 28 x 28 = 784. We ran our learning algorithm for 10 iterations and trained 100 experts (Figure 3). We see that some experts specialize on local structures while others focus on more global ones. In Figure 4 we visualize the inference procedure for some test samples using these 100 learned experts. On average 12 experts were activated for each data point. For easier visualization we show at most 10 iterations of the likelihood matching pursuit algorithm.
The reconstructions are overall accurate and peculiarities of the samples are smoothed out. In Figure 5 we illustrate how the expert supports change based on the latent representation. Depending on which other experts are present the supports can vary quite a bit."}, {"section_index": "12", "section_name": "4.3 WEIZMANN HORSES", "section_text": "The following experiment shows that our model is able to cope with very high-dimensional data. The Weizmann horse dataset (Borenstein & Ullman, 2008) consists of 328 binary images of size 200 x 240. We used the first 300 images to train 20 experts (Figure 6) and used the remaining 28 images for testing. Some of the experts are responsible for the background and the central region of the horse while other experts focus on local structures like head posture, legs and tail. In Figure 7 we illustrate the partitioning of the test examples into expert opinions. For simplicity we used exactly 4 experts to reconstruct each sample. Not all characteristics of the samples are perfectly reconstructed but the general pose is correctly recovered. The same dataset was used to evaluate the shape Boltzmann machine (Eslami et al., 2014), where 2,000 experts were learned. For those experiments the images were downsampled to 32 x 32 pixels. This is a factor 50 smaller than the full resolution of 48,000 dimensions that we use.

Figure 7: Decomposition of the test examples from the Weizmann horse dataset. 1st column: Original data points. 2nd column: Reconstructions (shown are the composed Bernoulli templates). 3rd-6th column: Partitioning into expert opinions (white/black corresponds to μ_k(d) = 0/1, gray indicates regions for which the expert is not responsible).

Figure 8: Reconstructions of the test examples from the Caltech motorcycle dataset. Odd rows: Original data. Even rows: Reconstructions (shown are the composed Gaussian means)."}, {"section_index": "13", "section_name": "4.4 CALTECH MOTORCYCLES", "section_text": "We also experimented with real-valued data using the Caltech-101 motorcycle dataset (Fei-Fei et al., 2007), which consists of 798 images of size 100 x 180. The first 750 images were used for training and the remaining 48 images for testing. We trained 50 experts by running our learning procedure for 10 iterations. In Figure 8 we visualize the reconstructed test examples. The reconstructions are a bit blurry since we use a fairly sparse binary representation. Indeed, for each data point on average only 7 experts were employed. Note that the shapes of the motorcycles are reconstructed quite accurately."}, {"section_index": "14", "section_name": "5 DISCUSSION", "section_text": "In order to improve the reconstructions for continuous image data we could use real-valued latent variables in addition to binary ones (as in Hinton et al., 1998). This would allow us to model intensities and contrasts more accurately. The inference procedure would have to be adapted accordingly such that continuous activations can be returned.

Our work focused on product distributions.
In order to apply the proposed approach to models with dependence structure one can make use of an autoregressive decomposition (e.g., Goessling & Amit, 2016). If the joint distribution is written as a product of conditional distributions then we can employ the same composition rule as before. Indeed, we can model the composed conditionals as

P( x(d) | x(1:d−1), h ) = P_{k*(d)}( x(d) | x(1:d−1) ),

where P_k are autoregressive expert models and k*(d) is the active expert with the highest level of expertise for dimension d."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Eran Borenstein and Shimon Ullman. Combined top-down/bottom-up segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(12):2109-2125, 2008.

Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (methodological), pp. 1-38, 1977.

Michael Elad. Sparse and redundant representations. Springer, 2010.

John A. Hartigan. Partition models. Communications in Statistics - Theory and Methods, 19(8):2745-2756, 1990.

Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.

Geoffrey E. Hinton, Brian Sallans, and Zoubin Ghahramani. A hierarchical community of experts. In Learning in Graphical Models, pp. 479-494. 1998.

Geoffrey McLachlan and David Peel. Finite mixture models. John Wiley & Sons, 2004.

Andrew Ng. Sparse autoencoder. CS294A Lecture Notes, 72:1-19, 2011.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning, pp. 1096-1103, 2008.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998."}, {"section_index": "16", "section_name": "6 DERIVATIVES", "section_text": "For binary data the log-likelihood of the smoothed model for a single dimension is

f(μ) = x log μ + (1 − x) log(1 − μ),    μ = Σ_k r_k μ_k,    r_k = e_k / Σ_{k'} e_{k'},

where the sums run over the active experts. The first and second derivative of the log-likelihood with respect to the composed probability are

df/dμ = x/μ − (1 − x)/(1 − μ) = (x − μ) / ( μ(1 − μ) ),
d²f/dμ² = −x/μ² − (1 − x)/(1 − μ)² = −(x − μ)² / ( μ²(1 − μ)² ).

The first and second derivative of the composed probability with respect to the expert probabilities are

dμ/dμ_k = r_k,    d²μ/dμ_k² = 0.

Consequently, the derivatives of the log-likelihood with respect to the expert probabilities are

df/dμ_k = (df/dμ) r_k,    d²f/dμ_k² = (d²f/dμ²) r_k² = −r_k² (x − μ)² / ( μ²(1 − μ)² ).

We see that d²f/dμ_k² < 0 for μ ∈ (0,1), i.e., the log-likelihood is a strictly concave function of μ_k. The derivative of the composed probability with respect to the levels of expertise is

dμ/de_k = ( μ_k E − Σ_{k'} e_{k'} μ_{k'} ) / E² = ( μ_k − μ ) / E,    E = Σ_{k'} e_{k'},

and hence

df/de_k = (df/dμ)(dμ/de_k).
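As a quick sanity check on the binary-case gradient above, the identity dμ/de_k = (μ_k − μ)/E can be verified numerically. The following self-contained snippet (our own illustration) compares the analytic expression against central finite differences for a single dimension with all experts active.

```python
# Numerical check of d(mu)/d(e_k) = (mu_k - mu)/E via finite differences.
import numpy as np

rng = np.random.default_rng(0)
K = 4
mu_k = rng.uniform(0.05, 0.95, size=K)   # expert opinions for one dimension
e = rng.uniform(0.5, 2.0, size=K)        # levels of expertise (all active)

def composed(e):
    return np.dot(e, mu_k) / e.sum()     # mu = sum_k r_k mu_k, r_k = e_k / E

analytic = (mu_k - composed(e)) / e.sum()

numeric = np.empty(K)
delta = 1e-6
for k in range(K):
    ep, em = e.copy(), e.copy()
    ep[k] += delta
    em[k] -= delta
    numeric[k] = (composed(ep) - composed(em)) / (2 * delta)

assert np.allclose(analytic, numeric, atol=1e-8)
```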
For continuous data the experts are Gaussian and the log-likelihood for a single dimension is

f(μ, v) = −(x − μ)² / (2v) − ½ log(v) − ½ log(2π),

where the composed mean and variance are

μ = Σ_k r_k μ_k,    v = Σ_k r_k ( v_k + μ_k² ) − μ²,    r_k = e_k / Σ_{k'} e_{k'}.

The derivatives of the log-likelihood with respect to the composed mean and variance are

df/dμ = (x − μ) / v,    df/dv = ( (x − μ)² − v ) / ( 2v² ).

The derivatives of the composed mean and variance with respect to the levels of expertise are

dμ/de_k = ( μ_k E − Σ_{k'} e_{k'} μ_{k'} ) / E² = ( μ_k − μ ) / E,
dv/de_k = ( v_k − v + ( μ_k − μ )² ) / E.

Consequently,

df/de_k = (df/dμ)(dμ/de_k) + (df/dv)(dv/de_k).

For binary data, the log-likelihood of the smoothed model is a concave function of μ_k(d), see the derivations above. We could therefore in principle perform an optimization for the expert opinions using Newton's method. There are a few complications though. One problem is that the second derivative is proportional to the squared responsibility and hence close to 0 if the level of expertise is small. Consequently, template updates in regions with low expertise would be unstable. To deal with that we could add a penalty on the squared log-odds for example. Another problem is that the Newton steps may lead to probability estimates outside of [0,1]. This can be dealt with by pulling the estimates back into the unit interval. Note that working on the log-odds scale is not possible because the log-likelihood of our model is not concave in the expert log-odds. Because of these complications we use the simple, fast and robust heuristic (5) instead of Newton's method."}]
HJ9rLLcxg
[{"section_index": "0", "section_name": "DATASET AUGMENTATION IN FEATURE SPACE", "section_text": "Terrance DeVries and Graham W. Taylor

Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One of the major catalysts for the resurgence of neural networks as \"deep learning\" was the influx of the availability of data. Labeled data is crucial for any supervised machine learning algorithm to work, even more so for deep architectures which are easily susceptible to overfitting. Deep learning has flourished in a few domains (e.g. images, speech, text) where labeled data has been relatively simple to acquire. Unfortunately most of the data that is readily available is unstructured and unlabeled and this has prevented recent successes from propagating to other domains. In order to leverage the power of supervised learning, data must be manually labeled, a process which requires investment of human effort. An alternative to labeling unlabeled data is to generate new data with known labels. One variant of this approach is to create synthetic data from a simulation such as a computer graphics engine (Shotton et al., 2013; Richter et al., 2016), however, this may not work if the simulation is not a good representation of the real world domain. Another option is dataset augmentation, wherein the existing data is transformed in some way to create new data that appears to come from the same (conditional) data generating distribution. The main challenge with such an approach is that domain expertise is required to ensure that the newly generated data respects valid transformations (i.e. those that would occur naturally in that domain).

In this work, we consider augmentation not by a domain-specific transformation, but by perturbing, interpolating, or extrapolating between existing examples. However, we choose to operate not in input space, but in a learned feature space. Bengio et al. (2013) and Ozair & Bengio (2014) claimed that higher level representations expand the relative volume of plausible data points within the feature space, conversely shrinking the space allocated for unlikely data points. As such, when traversing along the manifold it is more likely to encounter realistic samples in feature space than compared to input space. Unsupervised representation learning models offer a convenient way of learning useful feature spaces for exploring such transformations.
Recently, there has been a return of interest in such techniques, leading to, e.g., variational autoencoders (Kingma & Welling, 2014), generative adversarial networks (Goodfellow et al., 2014), and generative stochastic networks (Alain et al., 2016), each of which could be used to generate useful feature spaces for augmentation.

By manipulating the vector representation of data within a learned feature space a dataset can be augmented in a number of ways. One of the most basic transformations that can be applied to the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "data is to simply add random noise to the context vector. In the context of class-imbalanced data, Chawla et al. (2002) proposed interpolating between samples in feature space. Similarly extrapolation between samples could also be applied. We investigate some of these methods to see which is most effective for improving the performance of supervised learning models when augmented data is added to the dataset.

In this work, we demonstrate that extrapolating between samples in feature space can be used to augment datasets and improve the performance of supervised learning algorithms. The main benefit of our approach is that it is domain-independent, requiring no specialized knowledge, and can therefore be applied to many different types of problems. We show that models trained on datasets that have been augmented using our technique outperform models trained only on data from the original dataset. Just as dataset augmentation in input space has become standard for visual recognition tasks, we recommend dataset augmentation in feature space as a domain-agnostic, general-purpose framework to improve generalization when limited labeled data is available."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "For many years, dataset augmentation has been a standard regularization technique used to reduce overfitting while training supervised learning models. Data augmentation is particularly popular for visual recognition tasks as new data can be generated very easily by applying image manipulations such as shifting, scaling, rotation, and other affine transformations. When training LeNet5, one of the most early and well-known convolutional neural network architectures, LeCun et al. (1998) applied a series of transformations to the input images in order to improve the robustness of the model. Krizhevsky et al. (2012) also used image transformations to generate new data when training the renowned AlexNet model for the 2012 Large Scale Visual Recognition Challenge (ILSVRC). They claimed that dataset augmentation reduced the error rate of the model by over 1%. Creating new data has since been a crucial component of all recent large-scale image recognition models."}, {"section_index": "4", "section_name": "3 MODEL", "section_text": "Our dataset augmentation technique works by first learning a data representation and then applying transformations to samples mapped to that representation. Our hypothesis is that, due to manifold unfolding in feature space, simple transformations applied to encoded rather than raw inputs will result in more plausible synthetic data.

Unfortunately, dataset augmentation is not as straightforward to apply in all domains as it is for images. For example, Schlüter & Grill (2015) investigated a variety of data augmentation techniques for application to singing voice detection.
These include adding Gaussian noise to the input, shifting the pitch of the audio signal, time stretching, varying the loudness of the audio signal, applying random frequency filters, and interpolating between samples in input space. They found that only pitch shifting and random frequency filtering appeared to improve model performance. While performing well on audio data, these augmentation techniques cannot be applied to other domains. As such, the process of designing, implementing, and evaluating new data augmentation techniques would need to be repeated for each new problem.

Important to our work are sequence-to-sequence (seq2seq) learning models, developed independently by Cho et al. (2014) and Sutskever et al. (2014). These convert a sequence of inputs from one domain into a fixed-length context vector which is then used to generate an output sequence, usually from a different domain. For example, the first application of seq2seq learning by Cho and Sutskever was to translate between English and French. Sequence-to-sequence learning has recently been used to achieve state-of-the-art results on a large variety of sequence learning tasks including image captioning (Vinyals et al., 2015b; 2016), machine translation, and conversational modeling (Vinyals & Le, 2015). The seq2seq architecture can also be used to create sequence autoencoders (SA) by creating a model that learns to reconstruct input sequences in its output (Srivastava et al., 2015; Dai & Le, 2015). We use a variant of sequence autoencoders in our work to create a feature space within which we can manipulate data to augment a training set.

Figure 1: System architecture composed of three steps. (a) A sequence autoencoder learns a feature space from unlabeled data, representing each sequence by a context vector (C). (b) Data is encoded to context vectors and augmented by adding noise, interpolating, or extrapolating (here we depict interpolation). (c) The resulting context vectors can either be used directly as features for supervised learning with a static classifier, or they can be decoded to reconstruct full sequences for training a sequence classifier.

While any number of representation learning models could be explored, we use a sequence autoencoder to construct a feature space. The main reason we adopt SA is that we favour a generic method that can be used for either time series or static data."}, {"section_index": "5", "section_name": "3.1 SEQUENCE AUTOENCODER", "section_text": "An autoencoder consists of two parts: an encoder and a decoder. The encoder receives data as input and, by applying one or more parametrized nonlinear transformations, converts it into a new representation, classically lower-dimensional than the original input. The decoder takes this representation and tries to reconstruct the original input, also by applying one or more nonlinear transformations. Various regularized forms of autoencoders have been proposed to learn overcomplete representations.

A sequence autoencoder works in a similar fashion as the standard autoencoder except that the encoder and decoder use one or more recurrent layers so that they can encode and decode variable-length sequences. In all of our experiments, we use a stacked LSTM (Li & Wu, 2015) with two layers for both the encoder and decoder (Figure 1a).
During the forward pass, the hidden states of the recurrent layers are propagated through the layer stack. The encoder's hidden state at the final time step, called the context vector, is used to seed the hidden state of the decoder at its first time step. The main difference between our implementation of the SA and that of Dai & Le (2015) is how the context vector is used in the decoder. Dai and Le follow the original seq2seq approach of Sutskever et al. (2014) and use the context vector as input to the decoder only on the first time step, then use the output of the previous time step as inputs for all subsequent time steps as follows:

y_0 = f(s_0, c),    y_t = f(s_{t−1}, y_{t−1}),

where f is the LSTM function, s is the state of the LSTM (both hidden and cell state), c is the context vector, and y is the output of the decoder. We instead modify the above equation so that the decoder is conditioned on the context vector at each time step, as was done in Cho et al. (2014):

y_0 = f(s_0, c),    y_t = f(s_{t−1}, y_{t−1}, c).

We found that conditioning the decoder on the context vector each time step resulted in improved reconstructions, which we found to be critical to the success of the data augmentation process.

In order to augment a dataset, each example is projected into feature space by feeding it through the sequence encoder, extracting the resulting context vector, and then applying a transformation in feature space (Figure 1b). The simplest transform is to simply add noise to the context vectors, however, there is a possibility with this method that the resulting vector may not resemble the same class as the original, or even any of the known classes. In our experiments, we generate noise by drawing from a Gaussian distribution with zero mean and per-element standard deviation calculated across all context vectors in the dataset. We include a γ parameter to globally scale the noise:

c'_i = c_i + γX,    X ~ N(0, σ_i²),    (1)

where i indexes the elements of a context vector which corresponds to data points from the training set. A more directed approach for data augmentation follows the techniques introduced by Chawla et al. (2002). For each sample in the dataset, we find its K nearest neighbours in feature space which share the same label. For each pair of neighbouring context vectors, a new context vector can then be generated using interpolation:

c' = (c_j − c_i) λ + c_i,    (2)

where c' is the synthetic context vector, c_i and c_j are neighbouring context vectors, and λ is a variable in the range {0, 1} that controls the degree of interpolation. In our experiments, we use λ = 0.5 so that the new sample balances properties of both original samples. In a similar fashion, extrapolation can also be applied to the context vectors:

c' = (c_i − c_j) λ + c_i.    (3)

In the case of extrapolation, λ is a value in the range {0, ∞} which controls the degree of extrapolation. While λ could be drawn from a random distribution for each new sample we found that setting λ = 0.5 worked well as a default value in most cases, so we use this setting in all of our tests.

Once new context vectors have been created, they can either be used directly as input for a learning task, or they can be decoded to generate new sequences (Figure 1c). When interpolating between two samples, the resulting decoded sequence is set to be the average length of the two inputs. When extrapolating between two samples the length of the new sequence is set to be the same as that of c_i."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "For all classification experiments where interpolation or extrapolation was applied to generate new samples, we applied the following procedure unless otherwise stated. For each sample in the dataset we found the 10 nearest in-class neighbours by searching in feature space.
We then interpolated or extrapolated between each neighbour and the original sample to produce a synthetic example which was added to the augmented dataset. For all tests, the baseline model and the augmented dataset model(s) were trained for the same number of weight updates regardless of dataset size.

In all experiments, we trained a LSTM-based sequence autoencoder in order to learn a feature space from the available training examples. Each hidden layer, including the context vector, had the same number of hidden units and a dropout probability of p = 0.2. The autoencoders were trained using Adam (Kingma & Ba, 2015) with an initial learning rate of 0.001, which was reduced by half whenever no improvement was observed in the validation set for 10 epochs. Finally, we reversed the order of the input sequences as suggested by Sutskever et al. (2014). We found that reversing the order of input sequences caused the model to train faster and achieve better final solutions."}, {"section_index": "7", "section_name": "4.1 VISUALIZATION - SINUSOIDS", "section_text": "To gain an intuition of the method we start by working with a synthetic dataset of sinusoids. Sinusoids work well as a test case for this technique as they have a known behaviour and only two dimensions (amplitude and time), so we can easily observe the effects of the dataset augmentation process. To create a training set, sinusoids were generated with amplitude, frequency, and phase drawn from a uniform distribution.

For this toy problem, we trained a sequence autoencoder with 32 hidden units in each layer. We then applied different data augmentation strategies to observe the effects on the "synthetic" sinusoids.
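The three context-vector transforms of equations (1)-(3) are one-liners once the context vectors are in hand. The following NumPy sketch is our own illustration of them (variable names are assumptions, not the authors' code); sigma is the per-element standard deviation computed across all context vectors in the dataset.

```python
# Sketch of the feature-space transforms: noise (1), interpolation (2),
# and extrapolation (3) on context vectors.
import numpy as np

def add_noise(c, sigma, gamma=0.5, rng=None):
    """c' = c + gamma * X with X ~ N(0, sigma_i^2) per element."""
    rng = rng or np.random.default_rng()
    return c + gamma * rng.normal(0.0, sigma)

def interpolate(c_i, c_j, lam=0.5):
    """c' = (c_j - c_i) * lambda + c_i, lambda in [0, 1]."""
    return (c_j - c_i) * lam + c_i

def extrapolate(c_i, c_j, lam=0.5):
    """c' = (c_i - c_j) * lambda + c_i, lambda in [0, inf)."""
    return (c_i - c_j) * lam + c_i

# sigma would be computed once as: sigma = contexts.std(axis=0)
```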
This is an\nexciting observation as it suggests that we can control characteristics of the generated samples by\ncombining two samples which contain the desired properties.\nIn a similar fashion to interpolation we can also extrapolate between two samples using Equation\nFor this experiment we again vary the \\ parameter from 0 to | to generate a range of samples.\nAs seen in Figure this appears to have the effect of exaggerating properties of each sinusoid\nwith respect to the properties of the other sinusoid. For example, we see that new samples generated\nfrom the blue parent sinusoid increase in amplitude and decrease in phase shift. Conversely, samples\ngenerated from the green parent sinusoid decrease in amplitude and increase in phase shift. The\nbehaviour of the extrapolation operation could prove very beneficial for data augmentation as it\ncould be used to generate extra samples of rare or underrepresented cases within the dataset, which\nis acommon failure case.\nThe UJI Pen Characters dataset (v2) contains 11,640 instances of 97 different characters hand-\nwritten by 60 participants {2008). All samples were collected using a tablet PC\nand a stylus. Characters are defined by a sequence of X and Y coordinates, and include upper anc\nlower case ASCII letters, Spanish non-ASCII letters, the 10 digits, and other common punctuatior\nand symbols. As with the sinusoids in Section[4.1] handwritten characters are suitable for evaluating\ndataset augmentation methods as they have an expected shape and can be easily visualized.\nAs a preprocessing step for this dataset we first applied local normalization to each sample to get a\nfixed size, followed by a global normalization across the dataset as a whole. A sequence autoencoder\nwith 128 hidden units per layer was trained to construct the feature space within which data aug-\nmentation could take place. Figure [3a] demonstrates the effects of interpolating between characters\nin feature space. In this example we use the \u201c\u201c@\u201d symbol. We see that the resulting characters share\n(b) Interpolation (c) Extrapolation\n\n(a) Random noise\nFigure 2: Sinusoids with various transformations applied in feature space. (a) Random noise added\nwith 7 = 0.5. (b) Interpolation between two sinusoids for values of \\ between 0 and 1. (c) Extrap-\nolation between two sinusoids for values of \\ between 0 and 1. Best viewed in colour.\nFigure 3: Interpolation (a) and extrapolation (b) between handwritten characters. Character (0,i) is\ninterpolated/extrapolated with character (j,0) to form character (i,j), where i is the row number and j\nis the column number. Original characters are shown in bold.\ncharacteristics of the two parent inputs, such as the length of the symbol\u2019s tail or the shape of the\ncentral \u201ca\u201d. Visually the majority of generated samples appear very similar to their parents, which is\nonpected | from interpolation, but is not necessarily useful from the perspective of data augmentation.\nWhen augmenting data for the purpose of improving performance of machine learning algorithms it\nis desirable to create samples that are different from the data that is already common in the dataset.\nTo this end, extrapolating between samples is preferable, as shown in Figure [3b] Extrapolated data\ndisplays a wider variety compared to samples created by interpolation. 
We hypothesize that it is this\nadded variability that is necessary in order for data augmentation to be useful."}, {"section_index": "8", "section_name": "4.3 SPOKEN ARABIC DIGITS", "section_text": "For our first quantitative test we use the Arabic Digits dataset ( 3) which contains 8,80(\nsamples of time series mel-frequency cepstrum coefficients (MFCCs) extracted from audio clips o:\nspoken Arabic digits. Thirteen MFCCs are available for each time step in this dataset. To preproces:\nthe data we apply global normalization. To evaluate our data augmentation techniques we used the\nofficial train/test split and trained ten models with different random weight initializations.\nAs a baseline model we trained a simple two layer MLP on the context vectors produced by a SA.\nBoth models used 256 hidden units in each hidden layer. The MLP applied dropout with p = 0.5\nafter each dense layer. To evaluate the usefulness of different data augmentation techniques we\ntrained a new baseline model on datasets that had been augmented with newly created samples.\nThe techniques we evaluated were: adding random noise to context vectors, interpolating between\ntwo random context vectors from the same class, interpolating between context vectors and their\nnearest neighbours from the same class, and extrapolating between context vectors and their nearest\nneighbours from the same class. The results of our tests are summarized in Table/1]\nTable 1: Test set error on Arabic Digits dataset averaged over 10 runs\nOVwUvs\nCRCROMG\nORCRORG\nCEE RCRG)\nCRCRORG)\n\nCROC)\n\nCRCRCEG\nYVYO\nORCEGEG)\nCRCECEG\nCRICRORG\n\nCORO)\n\n(b) Extrapolation\n\n(a) Interpolation\nWe find that our simple baseline model achieves competitive performance after training on the\nextracted context vectors, demonstrating the feature extracting capability of the sequence autoen-\ncoder. The naive data augmentation approach of adding random noise to the context vectors further\nimproves performance. Of interest, we find that adding new samples generated using interpolation\ntechniques diminishes the performance of the model, which confirms our hypothesis that good data\naugmentation techniques should add variability to the dataset. Of the two interpolation techniques.\nOur second quantitative test was conducted on the Australian Sign Language Signs dataset (AUS-\nLAN). AUSLAN was produced by {[Kadous] (2002) and contains 2,565 samples of a native signer\nsigning 95 different words or phrases while wearing high quality position tracking gloves. Each\ntime series sample is, on average, 57 frames in length and includes 22 features: roll, pitch, yaw,\nfinger bend, and the 3D coordinates of each hand. To preprocess the raw data we first locally centre\neach sample and then apply global normalization. For evaluation, we perform cross validation with\n5 folds, as is common practice for the AUSLAN dataset.\nThe baseline model for these tests was a two layer MLP with 512 hidden units in each layer, with\ndropout (p = 0.5) applied on each. Similar to Arabic Digits, dataset we find that the simple MLI\ncan achieve competitive results when trained on the context vectors extracted from the sequence au\ntoencoder (see Table 2). In this case, however, we observe that adding random noise to the contex\nvectors did not improve performance. 
Our second quantitative test was conducted on the Australian Sign Language Signs dataset (AUSLAN). AUSLAN was produced by Kadous (2002) and contains 2,565 samples of a native signer signing 95 different words or phrases while wearing high quality position tracking gloves. Each time series sample is, on average, 57 frames in length and includes 22 features: roll, pitch, yaw, finger bend, and the 3D coordinates of each hand. To preprocess the raw data we first locally centre each sample and then apply global normalization. For evaluation, we perform cross validation with 5 folds, as is common practice for the AUSLAN dataset.

The baseline model for these tests was a two layer MLP with 512 hidden units in each layer, with dropout (p = 0.5) applied on each. Similar to the Arabic Digits dataset, we find that the simple MLP can achieve competitive results when trained on the context vectors extracted from the sequence autoencoder (see Table 2). In this case, however, we observe that adding random noise to the context vectors did not improve performance. One possible explanation for this outcome is that the AUSLAN dataset has many more classes than the Arabic Digits dataset (95 versus 10), so there is a higher probability of a randomly augmented context vector jumping from one class manifold to another. Traversing instead along the representational manifold in a directed manner, by extrapolating between neighbouring samples, results in improved performance over that of the baseline model. Our results also match the performance of Rodríguez et al. (2005), which to our knowledge is the best 5-fold cross validation result for the AUSLAN dataset.

Table 2: CV error on AUSLAN dataset averaged over 5 folds"}, {"section_index": "9", "section_name": "4.5 UCFKINECT", "section_text": "The final time series dataset we considered was the UCF Kinect action recognition dataset (Ellis et al., 2013). It contains motion capture data of participants performing 16 different actions such as run, kick, punch, and hop. The motion capture data consists of 3-dimensional coordinates for 15 skeleton joints for a total of 45 attributes per frame. In total there are 1,280 samples within the dataset. To preprocess the dataset we first shift the coordinates of each sample so that the central shoulder joint of the first frame is located at the origin. Global normalization is also applied.

With the UCFKinect dataset our main goal was to determine the effectiveness of interpolation in feature space for generating new sequences that combine the characteristics and actions of the two "seed" examples. We found that in order to produce natural looking results, the two actions to be combined must already share some properties. For example, Figures 4a and 4b show motion capture sequences of a person stepping forward and a person stepping to the left, respectively. Both of these actions take approximately the same amount of time to perform, and each skeleton moves their left leg first, then their right leg. Due to these preexisting similarities the action sequences can be interpolated in feature space to produce a natural looking sequence of a skeleton stepping diagonally forward and to the left (Figure 4c). These results emulate what was previously observed in Section 4.2, which indicated that similar properties are necessary for successful blending of examples.

Figure 4: A new motion capture sequence can be generated by interpolating between samples. By combining the "step front" action (a) with the "step left" action (b) we can generate a new sequence of a character stepping diagonally forward and to the left (c).

Our secondary goal with the UCFKinect dataset was to quantitatively evaluate the performance of extrapolation-based data augmentation. To compare to previous results, we used 4-fold cross validation (see Table 3 for a summary of results). We found that extrapolating between samples in representational space improved the performance of our untuned model by more than 1%, which is quite significant. Our results are 2.5 percentage points below the current state-of-the-art result produced by Beh et al. (2014), but further tuning of the model could improve results.

Table 3: CV error on UCFKinect dataset averaged over 4 folds

Having successfully applied dataset augmentation in feature space to improve the accuracy of sequence classification tasks, we now experiment with applying our technique to static data. For these experiments we concentrate on the image domain, where manual data augmentation is already prevalent.
We find that augmenting datasets by extrapolating within a learned feature space improves classification accuracy compared to no data augmentation, and in some cases surpasses traditional (manual) augmentation in input space.

In our experiments we consider two commonly used small-scale image datasets: MNIST and CIFAR-10. MNIST consists of 28x28 greyscale images containing handwritten digits from 0 to 9. There are 60,000 training images and 10,000 test images in the official split. CIFAR-10 consists of 32x32 colour images containing objects in ten generic object categories. This dataset is typically split into 50,000 training and 10,000 test images.

In all of our image experiments, we apply the same sequence autoencoder (SA) architecture as shown in Figure 1a to learn a representation. No pre-processing beyond a global scaling is applied to the MNIST dataset. For CIFAR-10 we apply global normalization and the same crop and flip operations that Krizhevsky et al. (2012) used for input space data augmentation when training AlexNet (we crop to 24x24). To simulate sequence input, the images are fed into the network one row of pixels per time step, similar to the SA setup in Dai & Le (2015).

For each dataset we train a 2-layer MLP on the context vectors produced by the sequence encoder. Both MLP and SA use the same number of hidden units in each layer: 256 per layer for MNIST and 1024 per layer for CIFAR-10. We conduct four different test scenarios on the MNIST dataset. To control for the representation, as a baseline we trained the classifier only on context vectors from the original images (i.e. SA with no augmentation). We then compare this to training with various kinds of dataset augmentation: traditional affine image transformations in input space (shifting, rotation, scaling), extrapolation between nearest neighbours in input space, and extrapolation between nearest neighbours in representational space. For both extrapolation experiments we use three nearest neighbours per sample and λ = 0.5 when generating new data. For CIFAR-10, our baseline is trained using context vectors extracted from cropped and flipped images. Against this baseline we test the addition of extrapolation between nearest neighbours in representational space, using the same setup as the MNIST test. Due to the size of the datasets we apply an approximate nearest neighbour algorithm.
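To illustrate the row-per-time-step input scheme described above, here is a small sketch (our own construction, not code from the paper) of converting an image batch into the sequence format a sequence autoencoder consumes.

```python
import numpy as np

def images_to_row_sequences(images):
    """Treat each image row as one time step:
    (batch, height, width[, channels]) -> (batch, time_steps, features)."""
    batch, height = images.shape[0], images.shape[1]
    # Flatten everything after the row dimension (width, and channels if any).
    return images.reshape(batch, height, -1)

# Example: 32 CIFAR-10 crops of 24x24x3 -> sequences of 24 steps, 72 features each.
crops = np.random.rand(32, 24, 24, 3)
sequences = images_to_row_sequences(crops)
print(sequences.shape)  # (32, 24, 72)
```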
Results are reported in Table 4. For MNIST, we find that extrapolating in feature space not only performs better than the baseline, but it also achieves a lower error rate compared to domain-specific data augmentation in input space. A similar outcome is observed in CIFAR-10, where feature space extrapolation reduces error rate by 0.3%. Interestingly, we note that the baseline test for this dataset already leveraged image transformations to improve performance, so the additional reduction in error rate could indicate that both kinds of augmentation, extrapolation in feature space and manual transformation in pixel space, could complement each other.

Table 4: Test error (%) on MNIST and CIFAR-10. Averages over 10 and 5 runs, respectively"}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "In this paper, we demonstrate a new domain-independent data augmentation technique that can be used to improve performance when training supervised learning models. We train a sequence autoencoder to construct a learned feature space in which we extrapolate between samples. This technique allows us to increase the amount of variability within the dataset, ultimately resulting in a more robust model. We demonstrate our technique quantitatively on five datasets from different domains (speech, sensor processing, motion capture, and images) using the same simple architecture and achieve near state-of-the-art results on two of them. Moreover, we show that data augmentation in feature space may complement domain-specific augmentation.

An important finding is that the extrapolation operator, when used in feature space, generated useful synthetic examples while noise and interpolation did not. Additional synthetic data experiments where we could control the complexity of the decision boundary revealed that extrapolation only improved model performance in cases where there were complex class boundaries. In cases with simple class boundaries, such as linear separability or one class encircling another, extrapolation hindered model performance, while interpolation helped. Our current hypothesis is that interpolation tends to tighten class boundaries and unnecessarily increase confidence, leading to overfitting. This behaviour may cause the model to ignore informative extremities that can describe a complex decision boundary and as a result produce an unnecessarily smooth decision boundary. As most high-dimensional, real datasets will typically have complex decision boundaries, we find extrapolation to be well suited for feature space dataset augmentation."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Grégoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep representations. In ICML (1), pp. 552-560, 2013.

Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357, 2002.

Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734, 2014.

Chris Ellis, Syed Zain Masood, Marshall F Tappen, Joseph J Laviola Jr, and Rahul Sukthankar. Exploring the trade-off between accuracy and observational latency in action recognition. International Journal of Computer Vision, 101(3):420-436, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Nacereddine Hammami, Mouldi Bedda, and Nadir Farah. Spoken Arabic digits recognition using MFCC based on GMM. In Sustainable Utilization and Development in Engineering and Technology (STUDENT), 2012 IEEE Conference on, pp. 160-163. IEEE, 2012.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1-10, 2015.

Mohammed Waleed Kadous. Temporal classification: Extending the classification paradigm to multivariate time series. PhD thesis, The University of New South Wales, 2002.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Xiangang Li and Xihong Wu. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4520-4524. IEEE, 2015.

Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

David Llorens, Federico Prat, Andrés Marzal, Juan Miguel Vilar, María José Castro, Juan-Carlos Amengual, Sergio Barrachina, Antonio Castellanos, Salvador España Boquera, JA Gómez, et al. The UJIpenchars database: a pen-based database of isolated handwritten characters. In LREC, 2008.

Juan José Rodríguez, Carlos J Alonso, and José A Maestro. Support vector machines of interval-based features for time series classification. Knowledge-Based Systems, 18(4):171-178, 2005.

Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116-124, 2013.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 843-852, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Oriol Vinyals and Quoc Le. A neural conversational model. In International Conference on Machine Learning: Deep Learning Workshop, 2015.

Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pp. 2773-2781, 2015a.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156-3164, 2015b."}]
BJC_jUqxe
[{"section_index": "0", "section_name": "A STRUCTURED SELF-ATTENTIVE\nSENTENCE EMBEDDING", "section_text": "This paper proposes a new model for extracting an interpretable sentence embed-\nding by introducing self-attention. Instead of using a vector, we use a 2-D matrix\nto represent the embedding, with each row of the matrix attending on a different\npart of the sentence. We also propose a self-attention mechanism and a special\nregularization term for the model. As a side effect, the embedding comes with an\neasy way of visualizing what specific parts of the sentence are encoded into the\nembedding. We evaluate our model on 3 different tasks: author profiling, senti-\nment classification and textual entailment. Results show that our model yields a\nsignificant performance gain compared to other sentence embedding methods in\nall of the 3 tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Much progress has been made in learning semantically meaningful distributed representations of\n\nindividual words, also known as word embeddings (Bengio et al.| 2001} [Mikolov et al.| 2013).\n\nOn the other hand, much remains to be done to obtain satisfying representations of phrases and\nsentences. Those methods generally fall into two categories. The first consists of universal sentence\n\nembeddings usually trained by unsupervised learning (Hill et al.}/2016). This includes SkipThought\nvectors (Kiros et al.|/2015), ParagraphVector (Le & Mikolov||2014), recursive auto-encoders (Socher\n(201 1}]2013), Sequential Denoising Autoencoders (SDAE), FastSent (Hill et al.|/2016), etc.\nThe other category consists of models trained specifically for a certain task. They are usually\ncombined with downstream applications and trained by supervised learning. One generally finds\nthat specifically trained sentence embeddings perform better than generic ones, although generic\nones can be used in a semi-supervised setting, exploiting large unlabeled corpora. Several model:\nhave been proposed along this line, by using recurrent networks (Hochreiter & Schmidhuber}|1997\nChung et al.|!2014), recursive networks (Socher et al.| 2013) and convolutional networks (Kalchbren-\nner et al.|/2014}/dos Santos & Gattil 2014} |Kim}|2014) as an intermediate step in creating sentence\nrepresentations to solve a wide variety of tasks including classification and ranking\n(2015). A common approach in previou:\nmethods consists in creating a simple vector representation by using the final hidden state of the\nRNN or the max (or average) pooling from either RNNs hidden states or convolved n-grams. Ad-\nditional works have also been done in exploiting linguistic structures such as parse and dependence\n\ntrees to improve sentence representations (Ma et al.|{2015}/Mou et al.|[2015b}|Tai et al.|/2015).\nFor some tasks people propose to use attention mechanism on top of the CNN or LSTM model to\nintroduce extra source of information to guide the extraction of sentence embedding (dos Santos\n. 
However, for some other tasks like sentiment classification, this is not directly applicable since there is no such extra information: the model is only given one single sentence as input. In those cases, the most common way is to add a max pooling or averaging step across all time steps (Lee & Dernoncourt, 2016), or just pick up the hidden representation at the last time step as the encoded embedding (Margarit & Subramaniam, 2016).

A common approach in many of the aforementioned methods consists of creating a simple vector representation by using the final hidden state of the RNN or the max (or average) pooling from either RNNs hidden states or convolved n-grams. We hypothesize that carrying the semantics along all time steps of a recurrent model is relatively hard and not necessary. We propose a self-attention mechanism for these sequential models to replace the max pooling or averaging step. Different from previous approaches, the proposed self-attention mechanism allows extracting different aspects of the sentence into multiple vector representations. It is performed on top of an LSTM in our sentence embedding model. This enables attention to be used in those cases when there are no extra inputs. In addition, due to its direct access to hidden representations from previous time steps, it relieves some long-term memorization burden from the LSTM. As a side effect coming together with our proposed self-attentive sentence embedding, interpreting the extracted embedding becomes very easy and explicit.

Section 2 details our proposed self-attentive sentence embedding model, as well as a regularization term we propose for this model, which is described in Section 2.2. We also provide a visualization method for this sentence embedding in Section 2.3. We then evaluate our model on author profiling, sentiment classification and textual entailment tasks in Section 4.

*This work has been done during the 1st author's internship with IBM Watson."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: A sample model structure showing the sentence embedding model combined with a fully connected and softmax layer for sentiment analysis (a). The sentence embedding M is computed as multiple weighted sums of hidden states from a bidirectional LSTM (h_1, ..., h_n), where the summation weights (A_{i1}, ..., A_{in}) are computed in a way illustrated in (b). Blue colored shapes stand for hidden representations, and red colored shapes stand for weights, annotations, or input/output."}, {"section_index": "4", "section_name": "2.1 MODEL", "section_text": "The proposed sentence embedding model consists of two parts. The first part is a bidirectional LSTM, and the second part is the self-attention mechanism, which provides a set of summation weight vectors for the LSTM hidden states. These summation weight vectors are dotted with the LSTM hidden states, and the resulting weighted LSTM hidden states are considered as an embedding for the sentence. It can be combined with, for example, a multilayer perceptron to
Besides using a fully connected layer, we also proposes an approach that prunes\nweight connections by utilizing the 2-D structure of matrix sentence embedding, which is detailed\nin Appendix [A] For this section, we will use Figure[T]to describe our model.\nSuppose we have a sentence, which has n tokens, represented in a sequence of word embedding:\nhi = LSTM (wi, a2)\nhy = LSTM (wi, haar)\na = softmax (wsatanh (W.1H*))\nThis vector representation usually focuses on a specific component of the sentence, like a special s\u00a2\nof related words or phrases. So it is expected to reflect an aspect, or component of the semantics i\na sentence. However, there can be multiple components in a sentence that together forms the overa\nsemantics of the whole sentence, especially for long sentences. (For example, two clauses linke\ntogether by an \u201d\u2019and.\u2019\u201d) Thus, to represent the overall semantics of the sentence, we need multiple m\u2019\nthat focus on different parts of the sentence. Thus we need to perform multiple hops of attentior\nSay we want r different parts to be extracted from the sentence, with regard to this, we extend th\nWs2 into ar-by-d, matrix, note it as W., and the resulting annotation vector a becomes annotatio\nmatrix A. Formally.\nA= softmax (W.2tanh (W.1H*))\nThe embedding vector m then becomes an r-by-2u embedding matrix 14. We compute the r\nweighted sums by multiplying the annotation matrix A and LSTM hidden states H, the resulting\nmatrix is the sentence embedding:\nS = (Wi, W2,-*: Wn)\nHere w; is a vector standing for a d dimentional word embedding for the 7-th word in the sentence.\nS is thus a sequence represented as a 2-D matrix, which concatenates all the word embeddings\ntogether. S should have the shape n-by-d.\nNow each entry in the sequence S are independent with each other. To gain some dependency be-\nween adjacent words within a single sentence, we use a bidirectional LSTM to process the sentence:\nH = (hy, hg,--- hy)\nOur aim is to encode a variable length sentence into a fixed size embedding. We achieve that by\nchoosing a linear combination of the n LSTM hidden vectors in H. Computing the linear combina-\ntion requires the self-attention mechanism. The attention mechanism takes the whole LSTM hidden\nstates H as input. and outputs a vector of weights a:\nHere W,, is a weight matrix with a shape of d,-by-2u. and wg,g is a vector of parameters with\nsize da, where d, is a hyperparameter we can set arbitrarily. Since H is sized n-by-2u, the anno-\ntation vector a will have a size n. the softmaz() ensures all the computed weights sum up to 1.\nThen we sum up the LSTM hidden states H according to the weight provided by a to get a vector\n\nrepresentation m of the input sentence.\nHere the softmax() is performed along the second dimension of its input. We can deem Equation\n[\u00e9las a 2-layer MLP without bias, whose hidden unit numbers is d,,, and parameters are {W.2, W.1}.\nThe embedding matrix / can suffer from redundancy problems if the attention mechanism always\nprovides similar summation weights for all the r hops. Thus we need a penalization term to encour-\nage the diversity of summation weight vectors across different hops of attention.\nThe best way to evaluate the diversity is definitely the Kullback Leibler divergence between any |\nof the summation weight vectors. However, we found that not very stable in our case. 
The embedding matrix M can suffer from redundancy problems if the attention mechanism always provides similar summation weights for all the r hops. Thus we need a penalization term to encourage the diversity of summation weight vectors across different hops of attention.

The best way to evaluate the diversity would be the Kullback-Leibler divergence between any 2 of the summation weight vectors. However, we found that not very stable in our case. We conjecture it is because we are maximizing a set of KL divergences (instead of minimizing only one, which is the usual case): we are optimizing the annotation matrix A to have a lot of sufficiently small or even zero values at different softmax output units, and this vast amount of zeros makes the training unstable. There is another property that KL doesn't provide but we want, which is that we want each individual row to focus on a single aspect of semantics, so we want the probability mass in the annotation softmax output to be more focused; with a KL penalty we cannot encourage that.

We hereby introduce a new penalization term which overcomes the aforementioned shortcomings. Compared to the KL divergence penalization, this term consumes only one third of the computation. We use the dot product of A and its transpose, subtracted by an identity matrix, as a measure of redundancy:

P = \|AA^T - I\|_F^2   (8)

Here \|\cdot\|_F stands for the Frobenius norm of a matrix. Similar to adding an L2 regularization term, this penalization term P will be multiplied by a coefficient, and we minimize it together with the original loss, which is dependent on the downstream application.

Let's consider two different summation vectors a^i and a^j in A. Because of the softmax, all entries within any summation vector in A should sum up to 1. Thus they can be deemed as probability masses in a discrete probability distribution. For any non-diagonal element a_{ij} (i ≠ j) in the AA^T matrix, it corresponds to a summation over the elementwise product of two distributions:

0 < a_{ij} = \sum_{k=1}^{n} a_k^i a_k^j < 1

where a_k^i and a_k^j are the k-th elements in the a^i and a^j vectors, respectively. In the most extreme case, where there is no overlap between the two probability distributions a^i and a^j, the corresponding a_{ij} will be 0. Otherwise, it will have a positive value. At the other extreme, if the two distributions are identical and both concentrate on one single word, it will have a maximum value of 1. We subtract an identity matrix from AA^T, which forces the elements on the diagonal of AA^T to approximate 1. This encourages each summation vector a^i to focus on as few words as possible, forcing each vector to be focused on a single aspect, and forces all other elements to 0, which punishes redundancy between different summation vectors."}, {"section_index": "5", "section_name": "2.3 VISUALIZATION", "section_text": "The interpretation of the sentence embedding is quite straightforward because of the existence of the annotation matrix A. For each row in the sentence embedding matrix M, we have its corresponding annotation vector a^i. Each element in this vector corresponds to how much the LSTM hidden state of the token at that position contributes. We can thus draw a heat map for each row of the embedding matrix M. This way of visualization gives hints on what is encoded in each part of the embedding, adding an extra layer of interpretation. (See Figures 3a and 3b.)

The second way of visualization can be achieved by summing up over all the annotation vectors and then normalizing the resulting weight vector to sum up to 1. Since it sums up all aspects of semantics of a sentence, it yields a general view of what the embedding mostly focuses on. We can figure out which words the embedding takes into account a lot, and which ones are skipped by the embedding. See Figures 3c and 3d.
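Continuing the sketch above, both the penalization term of Equation 8 and the overall attention used for the second visualization method reduce to a couple of lines over the annotation matrix A; again, this is our own illustrative code.

```python
import numpy as np

def penalization(A):
    """Eq. 8: squared Frobenius norm of (A A^T - I), penalizing redundant hops."""
    r = A.shape[0]
    return np.linalg.norm(A @ A.T - np.eye(r), ord='fro') ** 2

def overall_attention(A):
    """Sum the r annotation vectors and renormalize to 1, as in Section 2.3."""
    w = A.sum(axis=0)
    return w / w.sum()

# With A (r-by-n) from the previous sketch:
# p = penalization(A)           # scalar added (times a coefficient) to the loss
# heat = overall_attention(A)   # one weight per token, for a heat map
```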
The matri>\nstructure together with the penalization term gives our model a greater capacity to disentangle the\nlatent information from the input sentence. We also do not use linguistic structures to guide ow:\nsentence representation model. Additionally, using our method we can easily create visualization:\nthat can help in the interpretation of the learned representations.\nP=|\\(AAT-DII-\nHere ||e|| ,. stands for the Frobenius norm of a matrix. Similar to adding an L2 regularization term,\nthis penalization term P will be multiplied by a coefficient, and we minimize it together with the\noriginal loss, which is dependent on the downstream application.\nSome recent work have also proposed supervised methods that use intra/self-sentence attention. [Ling\nproposed an attention based model for word embedding, which calculates an attention\nweight for each word at each possible position in the context window. However this method cannot\nbe extended to sentence level embeddings since one cannot exhaustively enumerate all possible\nsentences. proposes a sentence level attention which has a similar motivation but\ndone differently. They utilize the mean pooling over LSTM states as the attention source, and use\nthat to re-weight the pooled vector representation of the sentence.\nApart from the previous 2 variants, we want to note that proposed a same sell!\nattention mechanism for question encoding in their factoid QA model, which is concurrent to ou\nwork. The difference lies in that their encoding is still presented as a vector, but our attentior\nproduces a matrix representation instead, with a specially designed penalty term. We applied the\nmodel for sentiment anaysis and entailment, and their model is for factoid QA.\ntion mechanism, which is later used by We see our attention and theirs as havin;\ndifferent granularities. LSTMN produces an attenti ctor for each of its hidden states during the\nrecurrent iteration, which is sort of an online updating\u201d attention. It\u2019s more fine-grained, targetins\nat discovering lexical correlations between a certain word and its previous words. On the contrary\nour attention mechanism is only performed once, focuses directly on the semantics that makes sens\u00a2\nfor discriminating the targets. It is less focused on relations between words, but more on the seman:\ntics of the whole sentence that each word contributes to. Computationally, our method also scales uf\nwith the sentence length better, since it doesn\u2019t require the LSTM to compute an annotation vecto!\nover all of its previous words each time when the LSTMN computes its next step.\n\nThe LSTMN model (Cheng et al.||2016) also proposed a very successful intra-sentence level atten.\nParikh et al."}, {"section_index": "6", "section_name": "4 EXPERIMENTAL RESULTS", "section_text": "We first evaluate our sentence embedding model by applying it to 3 different datasets: the Ag\ndataset, the Yelp dataset, and the Stanford Natural Language Inference (SNLI) Corpus. These <\ndatasets fall into 3 different tasks, corresponding to author profiling, sentiment analysis, and tex\ntual entailment, respectively. Then we also perform a set of exploratory experiments to validate\nproperties of various aspects for our sentence embedding model."}, {"section_index": "7", "section_name": "4.1 AUTHOR PROFILING", "section_text": "We compare our model with two baseline models: biLSTM and CNN. 
For the two baseline models: the biLSTM model uses a bidirectional LSTM with 300 dimensions in each direction, and uses max pooling across all LSTM hidden states to get the sentence embedding vector, followed by a 2-layer ReLU output MLP with 3000 hidden states to output the classification result. The CNN model uses the same scheme, but substitutes the biLSTM with 1 layer of 1-D convolutional network. During training we use 0.5 dropout on the MLP and 0.0001 L2 regularization. We use stochastic gradient descent as the optimizer, with a learning rate of 0.06 and batch size 16. For biLSTM, we also clip the norm of gradients to be between -0.5 and 0.5. We searched hyperparameters in a wide range and found that the aforementioned set yields the highest accuracy.

For our model, we use the same settings as we did in biLSTM. We also use a 2-layer ReLU output MLP, but with 2000 hidden units. In addition, our self-attention MLP has a hidden layer with 350 units (the d_a in Section 2), we choose the matrix embedding to have 30 rows (the r), and a coefficient of 1 for the penalization term.

We train all the three models until convergence and select the corresponding test set performance according to the best development set performance. Our results show that the model outperforms both of the biLSTM and CNN baselines by a significant margin.

Table 1: Performance Comparison of Different Models on Yelp and Age Dataset

Figure 2: Heatmap of Yelp reviews with the two extreme scores."}, {"section_index": "8", "section_name": "4.2 SENTIMENT ANALYSIS", "section_text": "We choose the Yelp dataset for the sentiment analysis task. It consists of 2.7M Yelp reviews; we take the review as input and predict the number of stars the user who wrote that review assigned to the corresponding business store. We randomly select 500K review-star pairs as training set, 2000 for development set, and 2000 for test set. We tokenize the review texts with the Stanford tokenizer. We use 100 dimensional word2vec as initialization for word embeddings, and tune the embedding during training across all of our experiments. The target number of stars is an integer in the range [1,5], inclusive. We treat the task as a classification task, i.e., classify a review text into one of the 5 classes, and use classification accuracy as the measurement.

For the two baseline models, we use the same setting as for the Author Profiling dataset, except that we use a batch size of 32 instead. For our model, we also use the same setting, except that we choose the hidden unit number in the output MLP to be 3000 instead. We also observe a significant performance gain compared to the two baselines (Table 1).
As an interpretation of the learned sentence embedding, we use the second way of visualization described in Section 2.3 to plot heat maps for some of the reviews in the dataset. We randomly select 5 examples of negative (1 star) and positive (5 stars) reviews from the test set, where the model has a high confidence (> 0.8) in predicting the label. As shown in Figure 2, we find that the model mainly learns to capture some key factors in the review that indicate strongly the sentiment behind the sentence. For most of the short reviews, the model manages to capture all the key factors that contribute to an extreme score, but for longer reviews, the model is still not able to capture all related factors. For example, in the 3rd review in Figure 2b, it seems that a lot of focus is spent on one single factor, i.e., the "so much fun", and the model puts a little amount of attention on other key points like "highly recommend", "amazing food", etc."}, {"section_index": "9", "section_name": "4.3 TEXTUAL ENTAILMENT", "section_text": "We use the biggest dataset in textual entailment, the SNLI corpus (Bowman et al., 2015), for our evaluation on this task. SNLI is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral. The model will be given a pair of sentences, called hypothesis and premise respectively, and asked to tell if the semantics in the two sentences are contradicting with each other or not. It is also a classification task, so we measure the performance by accuracy.

We process the hypothesis and premise independently, and then extract the relation between the two sentence embeddings by using the multiplicative interactions proposed in Memisevic (2013) (see Appendix B for details), and use a 2-layer ReLU output MLP with 4000 hidden units to map the hidden representation into classification results. Parameters of the biLSTM and attention MLP are shared across hypothesis and premise. The biLSTM is 300 dimensions in each direction, the attention MLP has 150 hidden units instead, and both sentence embeddings for hypothesis and premise have 30 rows (the r). The penalization term coefficient is set to 0.3. We use 300 dimensional GloVe (Pennington et al., 2014) word embeddings to initialize word embeddings. We use AdaGrad as the optimizer, with a learning rate of 0.01. We don't use any extra regularization methods, like dropout or L2 normalization. Training converges after 4 epochs, which is relatively fast.

This task is a bit different from the previous two tasks, in that it has 2 sentences as input. There are a bunch of ways to add inter-sentence level attention, and those attentions bring a lot of benefits. To make the comparison focused and fair, we only compare methods that fall into the sentence encoding-based models, i.e., there is no information exchanged between the hypothesis and premise before they are encoded into some distributed encoding.

Table 2: Test Set Performance Compared to other Sentence Encoding Based Methods in SNLI Dataset

We find that compared to other published approaches, our method shows a significant gain (> 1%) over them, except for the 300D NSE encoders, which is the state-of-the-art in this category.
However, the 0.2% difference is relatively small compared to the differences between other methods.

In this subsection we are going to do a set of exploratory experiments to study the relative effect of each component in our model."}, {"section_index": "10", "section_name": "4.4.1 EFFECT OF PENALIZATION TERM", "section_text": "Since the purpose of introducing the penalization term P is majorly to discourage the redundancy in the embedding, we first directly visualize the heat maps of each row when the model is presented with a sentence. We compare two identical models with the same size as detailed in Section 4.1, trained separately on the Age dataset, one with this penalization term (where the penalization coefficient is set to 1.0) and the other with no penalty. We randomly select one tweet from the test set and compare the two models by plotting a heat map for each hop of attention on that single tweet. Since there are 30 hops of attention for each model, which makes plotting all of them quite redundant, we only plot 6 of them. These 6 hops already reflect the situation in all of the 30 hops.

Figure 3: Heat maps for 2 models trained on the Age dataset. The left column is trained without the penalization term, and the right column is trained with 1.0 penalization. (a) and (b) show detailed attentions taken by 6 out of 30 rows of the matrix embedding, while (c) and (d) show the overall attention by summing up all 30 attention weight vectors.

From the figure we can tell that the model trained without the penalization term has lots of redundancies between different hops of attention (Figure 3a), resulting in putting a lot of focus on the word "it" (Figure 3c), which is not so relevant to the age of the author. However in the right column, the model shows more variations between different hops, and as a result, the overall embedding focuses on "mail-replies spam" instead (Figure 3d).

To validate whether these differences result in a performance difference, we evaluate four models trained on the Yelp and Age datasets, both with and without the penalization term. Results are shown in Table 3. Consistent with what we expected, models trained with the penalization term outperform their counterparts trained without it.

Table 3: Performance comparison regarding the penalization term

For the Yelp dataset, we also observe a similar phenomenon. To make the experiments more explorative, we choose to plot heat maps of overall attention for more samples, instead of plotting detailed heat maps for a single sample again. Figure 4 shows the overall focus of the sentence embedding on three different reviews. We observe that with the penalization term, the model tends to be more focused on important parts of the review. We think this is because we are encouraging it to be focused, in the diagonals of matrix AA^T (Equation 8).

Figure 4: Attention of sentence embedding on 3 different Yelp reviews. The left one is trained without penalization, and the right one is trained with 1.0 penalization.

On the SNLI dataset, although we observe that introducing the penalization term still contributes to encouraging the diversity of different rows in the matrix sentence embedding, and forcing the network to be more focused on the sentences, the quantitative effect of this penalization term is not so obvious: both models yield similar test set accuracies."},
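For readers who want to reproduce this kind of inspection, a per-hop heat map can be drawn directly from the annotation matrix; the following matplotlib sketch is our own illustration, assuming a token list and an r-by-n annotation matrix A from the model.

```python
import matplotlib.pyplot as plt

def plot_attention(tokens, A, hops=6):
    """Show the first `hops` rows of the annotation matrix as a heat map."""
    fig, ax = plt.subplots(figsize=(len(tokens) * 0.5, hops * 0.4))
    ax.imshow(A[:hops], cmap='Reds', aspect='auto')
    ax.set_xticks(range(len(tokens)))
    ax.set_xticklabels(tokens, rotation=90)
    ax.set_yticks(range(hops))
    ax.set_yticklabels([f'hop {i}' for i in range(hops)])
    plt.tight_layout()
    plt.show()

# tokens: list of n strings; A: r-by-n annotation matrix from Eq. 6.
```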
{"section_index": "11", "section_name": "4.4.2 EFFECT OF MULTIPLE VECTORS", "section_text": "Having multiple rows in the sentence embedding is expected to provide more abundant information about the encoded content. It makes sense to evaluate how significant the improvement brought by r can be. Taking the models we used for the Age and SNLI datasets as an example, we vary r from 1 to 30 for each task, and train the resulting 10 models independently (Figure 5). Note that when r = 1, the sentence embedding reduces to a normal vector form.

Figure 5: Effect of the number of rows (r) in matrix sentence embedding. The vertical axes indicate test set accuracy and the horizontal axes indicate training epochs. Numbers in the legends stand for the corresponding values of r. (a) is conducted on the Age dataset and (b) is conducted on the SNLI dataset.

From this figure we can find that, without having multiple rows, the model performs on-par with its competitors which use other forms of vector sentence embeddings. But there is a significant difference between having only one vector for the sentence embedding and having multiple vectors. The models are also quite invariant with respect to r, since in the two figures a wide range of values between 10 and 30 all generate comparable curves.
In this paper, we introduced a fixed size, matrix sentence embedding with a self-attention mechanism. Because of this attention mechanism, there is a way to interpret the sentence embedding in depth in our model. Experimental results over 3 different tasks show that the model outperforms other sentence embedding models by a significant margin.

Introducing the attention mechanism allows the final sentence embedding to directly access previous LSTM hidden states via the attention summation. Thus the LSTM doesn't need to carry every piece of information towards its last hidden state. Instead, each LSTM hidden state is only expected to provide shorter term context information around each word, while the higher level semantics, which requires longer term dependency, can be picked up directly by the attention mechanism. This setting relieves the burden on the LSTM to carry long term dependencies. Our experiments also support that, as we observed that our model has a bigger advantage when the contents are longer. Furthermore, the notion of summing up elements in the attention mechanism is very primitive; it can be something more complex than that, which would allow more operations on the hidden states of the LSTM.

The model is able to encode any sequence with variable length into a fixed size representation, without suffering from long-term dependency problems. This brings a lot of scalability to the model: without any modification, it can be applied directly to longer contents like paragraphs, articles, etc. Though this is beyond the focus of this paper, it remains an interesting direction to explore as future work.

As a downside of our proposed model, the current training method heavily relies on downstream applications, thus we are not able to train it in an unsupervised way. The major obstacle towards enabling unsupervised learning in this model is that during decoding, we don't know a priori how the different rows in the embedding should be divided and reorganized. Exploring all those possible divisions by using a neural network could easily end up with overfitting. Although we can still do unsupervised learning on the proposed model by using a sequential decoder on top of the sentence embedding, it merits more to find some other structures as a decoder."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to acknowledge the developers of Theano (Theano Development Team, 2016) and Lasagne. The first author would also like to thank IBM Watson for providing resources, funding and valuable discussions to make this project possible, and Caglar Gulcehre for helpful discussions."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pp. 932-938, 2001.

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.

Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. arXiv preprint arXiv:1603.06021, 2016.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. Applying deep learning to answer selection: a study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2015, Scottsdale, AZ, USA, December 13-17, 2015, pp. 813-820, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.

Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090, 2016a.

Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. arXiv preprint arXiv:1605.09090, 2016b.

Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pp. 174-179, 2015.

Horia Margarit and Raghav Subramaniam. A batch-normalized recurrent network for sentiment classification. In Advances in Neural Information Processing Systems, 2016.

Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 2315-2325, Lisbon, Portugal, September 2015b. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1279.

Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. arXiv preprint arXiv:1607.04492, 2016a.

Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(4):694-707, 2016.

Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.

Theano Development Team. Theano: A {Python} framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, 2016. URL http://arxiv.org/abs/1605.02688.

Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361, 2015.

Wenpeng Yin and Hinrich Schütze. Convolutional neural network for paraphrase identification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 901-911, 2015.
"}, {"section_index": "14", "section_name": "A PRUNED MLP FOR STRUCTURED MATRIX SENTENCE EMBEDDING", "section_text": "As a side effect of having multiple vectors to represent a sentence, the matrix sentence embedding is usually several times larger than vector sentence embeddings. This results in needing more parameters in the subsequent fully connected layer, which connects every hidden unit to every unit in the matrix sentence embedding. Actually, in the example shown in Figure 1, this fully connected layer takes around 90% of the parameters; see Table 4. In this appendix we are going to introduce a weight pruning method which, by utilizing the 2-D structure of the matrix embedding, is able to drastically reduce the number of parameters in the fully connected hidden layer.

Inheriting the notation used in the main paper, let the matrix embedding M have a shape of r-by-u, and let the fully connected hidden layer have b units. The normal fully connected hidden layer requires each hidden unit to be connected to every unit in the matrix embedding, as shown in Figure 1. This ends up with r x u x b parameters in total.

However, there are 2-D structures in the matrix embedding, which we should make use of. Each row (m_i in Figure 1) in the matrix is computed from a weighted sum of LSTM hidden states, which means the rows share some similarities.

To reflect these similarities in the fully connected layer, we split the hidden states into r equally sized groups, with each group having p units. The i-th group is only fully connected to the i-th row in the matrix representation. All connections that connect the i-th group hidden units to other rows of the matrix are pruned away. In this way, similarity between different rows of the matrix embedding is reflected as symmetry of connection type in the hidden layer. As a result, the hidden layer can be interpreted as also having a 2-D structure, with the number (r) and size (p) of groups as its two dimensions (the M' in Figure 6), when the total number of hidden units is kept the same (i.e., b = r x p).

Figure 6: Hidden layer with pruned weight connections. M is the matrix sentence embedding; M' and M'' are the structured hidden representations computed by pruned weights.

On the other dimension, another form of similarity exists too. For each vector representation m_i in M, the j-th element m_{ij} is a weighted sum of an LSTM hidden unit at different time steps. And for a certain j-th element in all vector representations, they are summed up from the same LSTM hidden unit. We can also reflect this similarity in the symmetry of weight connections by using the same pruning method as above. Thus we have another 2-D structured set of hidden states sized u-by-q, noted as M'' in Figure 6.
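To make the grouped connection pattern concrete, here is a small numpy sketch (our own illustration, not the paper's code) contrasting the pruned layer computing M' with the dense layer it replaces.

```python
import numpy as np

r, u, p = 30, 600, 150               # rows in M, row width, hidden units per group

M = np.random.randn(r, u)            # matrix sentence embedding
W = np.random.randn(r, u, p) * 0.1   # one u-by-p weight matrix per row

# Pruned layer: group i is connected only to row i of M.
M_prime = np.einsum('ru,rup->rp', M, W)     # shape (r, p); r*u*p parameters

# Dense equivalent for comparison: every unit sees every entry of M.
b = r * p
W_dense = np.random.randn(r * u, b) * 0.1   # r*u*b parameters, r times more
h_dense = M.reshape(-1) @ W_dense
```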
Table 4: Model Size Comparison Before and After Pruning

                             Hidden layer   Softmax   Other Parts   Total    Accuracy
Yelp, Original, b=3000       54M            15K       1.3M          55.3M    64.21%
Yelp, Pruned, p=150, q=10    2.7M           52.5K     1.3M          4.1M     63.86%
Age, Original, b=4000        72M            20K       1.3M          73.2M    80.45%
Age, Pruned, p=25, q=20      822K           63.75K    1.3M          2.1M     77.32%
SNLI, Original, b=4000       72M            12K       22.9M         95.0M    84.43%
SNLI, Pruned, p=300, q=10    5.6M           45K       22.9M         28.6M    83.16%

Table 4 takes the model we use for the Yelp dataset as a concrete example, and compares the number of parameters in each part of the model, both before and after pruning. We can see the above pruning method drastically reduces the model size. Note that the p and q in this structure can be adjusted freely as hyperparameters. Also, we can continue the corresponding pruning process on top of M' and M'' over and over again, and end up with a stack of structured hidden layers, just like stacking fully connected layers.

The subsequent softmax layer will be fully connected to both M' and M'', i.e., each unit in the softmax layer is connected to all units in M' and M''. This is not a problem since the speed of softmax is largely dependent on the number of softmax units, which is not changed. In addition, for applications like sentiment analysis and textual entailment, the softmax layer is so tiny that it only contains several units.

Experimental results on the three datasets have shown that this pruning mechanism lowers performance a bit, but still allows all three models to perform comparably to or better than the other models compared in the paper."}, {"section_index": "15", "section_name": "B DETAILED STRUCTURE OF THE MODEL FOR SNLI DATASET", "section_text": "In Section 4.3 we tested our matrix sentence embedding model on the textual entailment task on the SNLI dataset. Different from the former two tasks, the textual entailment task takes a pair of sentences as input. We propose to use a set of multiplicative interactions to combine the two sentence embeddings.

The overall structure of our model for SNLI is depicted in Figure 7. For both hypothesis and premise, we extract their embeddings (M_h and M_p in the figure) independently, with the same LSTM and attention mechanism. The parameters of this part of the model are shared (rectangles with dashed orange line in the figure).

Figure 7: Model structure used for the textual entailment task.

Comparing the two matrix embeddings corresponds to the green dashed rectangle part in the figure, which computes a single matrix embedding (F_r) as the factor of the semantic relation between the two sentences. To represent the relation between M_h and M_p, F_r can be connected to M_h and M_p through a three-way multiplicative interaction. In a three-way multiplicative interaction, the value of any one of F_r, M_h and M_p is a function of the product of the others. This type of connection was originally introduced to extract relations between images (Memisevic, 2013). Since here we are just computing the factor of relations (F_r) from M_h and M_p, it corresponds to the encoder part of the Factored Gated Autoencoder in Memisevic (2013). We call it the Gated Encoder in Figure 7.

First we multiply each row in the matrix embedding by a different weight matrix. Repeating this over all rows corresponds to a batched dot product between a 2-D matrix and a 3-D weight tensor. Inheriting the name in Memisevic (2013), we call the resulting matrix a factor. Doing the batched dot for both the hypothesis embedding and the premise embedding, we have F_h and F_p, respectively:

F_h = batcheddot(M_h, W_{fh})
F_p = batcheddot(M_p, W_{fp})

Here W_{fh} and W_{fp} are the two weight tensors for the hypothesis embedding and premise embedding. The factor of the relation (F_r) is just an element-wise product of F_h and F_p (the triangle in the middle of Figure 7):

F_r = F_h \odot F_p

Here \odot stands for element-wise product. After the F_r layer, we then use an MLP with softmax output to classify the relation into different categories."}]
SJ8BZTjeg
[{"section_index": "0", "section_name": "UNSUPERVISED LEARNING USING GENERATIVE AD-\nVERSARIAL TRAINING AND CLUSTERING", "section_text": "Vittal Premachandran and Alan L. Yuille\n{vittalp, ayuillel}@jhu.edu\nIn this paper, we propose an unsupervised learning approach that makes use of two\ncomponents; a deep hierarchical feature extractor, and a more traditional cluster-\ning algorithm. We train the feature extractor in a purely unsupervised manner\nusing generative adversarial training and, in the process, study the strengths of\nlearning using a generative model as an adversary. We also show that adversar-\nial training as done in Generative Adversarial Networks (GANs) is not sufficient\nto automatically group data into categorical clusters. Instead, we use a more tra-\nditional grouping algorithm, k-means clustering, to cluster the features learned\nusing adversarial training. We experiment on three well-known datasets, CIFAR-\n10, CIFAR-100 and STL-10. The experiments show that the proposed approach\nperforms similarly to supervised learning approaches, and, might even be better\nin situations with small amounts of labeled training data and large amounts of\nunlabeled data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Much of the recent work in machine learning and computer vision has focused on learning tech-\nniques for high-level tasks such as image classification (Krizhevsky et al. (2012 ;|Simonyan &\n(2014); (2015)). Many of the state-of-the-art models employ Convolutional\nNeural Networks (CNNs) to extract high-level feature representations by processing the input date\nusing multiple layers of convolutions, usually followed by some non-linear transform. CNNs have\nsuccessfully demonstrated to yield high-quality feature representations that produce state-of-the-art\nresults on a variety of tasks, not only on image classification (as mentioned above), but also or\nsemantic segmentation (Long et al.| (2015); (2016a)), boundary detection (Xie & Ti\n(2015); Premachandran et al ), and object detection (Girshick et al) (2014), among oth-\ners. These models are trained to produce high-quality features using backpropagation, usually by\npretraining on a large dataset (such as ImageNet) and then fine tuning on the relevant dataset. Un-\nfortunately, supervised learning suffers from certain challenges, especially, in terms of scalability\nsince it requires large amounts of labeled data. Labeling millions of images requires extensive effort\nand is time consuming. Moreover, supervised training with a predefined set of classes, limits the\ngeneralizability of the learned feature representations to novel classes.\nTo overcome the difficulties of labeling large amounts of training data, effort has gone into th\u00e9\ndevelopment of semi-supervised and unsupervised learning techniques. The goal of unsupservisec\nlearning techniques is to learn representations that are interpretable, easily transferable to nove\ntasks and novel object categories, and to disentangle the informative representation of the data fron\nnuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widel)\nused method for unsupervised learning is to do clustering using k-Means. k-Means clustering is\nsimple method that groups input features into different clusters. Traditionally, this approach mainly\nused low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features\netc. 
Although the performance of k-means on such features is usually poor, Wang et al. (2015) used deep network features and employed k-means clustering to show strong results on grouping object parts. But the deep network that was used to extract the features was pre-trained on ImageNet using class-label supervision (so, object knowledge was known). It would be a natural extension to see if one can learn robust features using hierarchical feature learning in a purely unsupervised manner."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "However, since the objectives of unsupervised learning are not as concrete as the objectives of supervised learning, optimizing deep hierarchical models using backpropagation becomes difficult. Attempts have been made to come up with "pretext" objective functions, which are usually driven by "common sense" requirements, to do unsupervised learning. Some examples of these objectives include minimizing the reconstruction error (Vincent et al.), training models to identify surrogate classes (Dosovitskiy et al. (2014)), predicting the spatial position of image patches (Doersch et al. (2015); Noroozi & Favaro (2016)), and minimizing the distance in the representation space for objects tracked over a time period in a video sequence (Wang & Gupta (2015)).

Recently, much interest has gone into adversarial training. Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) are of particular interest in this work. Progress in GANs has enabled significant improvement in the quality of images being generated in the past couple of years (Denton et al. (2015); Radford et al. (2015)). While much of the recent effort has gone into the development of better architectures and training procedures for modeling and training the generative network, in this work, we systematically study the power of the representations learned by the generator's adversary, i.e., the discriminative model.

In this paper, we learn a deep network using generative adversarial training. We use the features extracted from the discriminative component and fuse them with traditional unsupervised learning algorithms like k-Means to improve their performance. We perform various experiments over many different datasets (CIFAR-10, CIFAR-100 and STL-10) and show that the representations that can be learned purely by unsupervised learning from an adversarial signal help to learn meaningful representations of input data. Our experiments show that, under situations with minimal amounts of supervised training examples (and large amounts of unsupervised data), the representations learned with adversarial training perform competitively in comparison to supervised training on a similar architecture. We now provide a brief summary of adversarial training as employed by GAN and InfoGAN.

Generative Adversarial Networks (Goodfellow et al. (2014)) are composed of two components; the generator, G(.), and the discriminator, D(.). The generator maps a latent encoding to the data space, while the discriminator distinguishes between samples generated by the generator and real data. The generator is trained to fool the discriminator, while the discriminator is trained to not get fooled by the generator.

More formally, given training data samples, x ∼ P_data(x), where P_data(x) is the true data distribution, the training of GANs proceeds by iterating between two steps. In the first step, we fix the parameters of the generative model, sample a latent code, z ∼ P_noise(z), and generate data samples, G(z), which are then used to train the discriminator, D(.), by updating its parameters to distinguish between G(z) and x, maximizing the expected log-likelihood. Jointly, the two players optimize

min_G max_D V(G, D) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_noise(z)}[log(1 − D(G(z)))].

"}, {"section_index": "3", "section_name": "2.1 INFOGAN", "section_text": "The formulation described above uses a noise vector, z, which is used by the generator, G(.), to synthesize data.
This noise vector does not impose any constraints on what the generated data should look like. Chen et al. (2016b) introduce a neat and simple idea to extend GANs into a feature-identifying system called InfoGAN. InfoGAN uses a structured latent code, c, which is input to the generator, G(.), in addition to the noise vector, z. The code can either be a discrete code or a continuous code. In order to encourage the code to capture the inherent semantic structures in the training data, a new term is introduced to the objective function, which acts as a regularizer that forces high mutual information between the latent code, c, and the generated sample, G(z, c). Since it is hard to maximize the mutual information, I(c; G(z, c)), directly (because one would need to know the true distribution P(c|x)), Chen et al. (2016b) provide a variational lower bound, which can be obtained when using a parametric auxiliary distribution, Q(c|x), to approximate P(c|x). The variational lower bound that is obtained is

L_I(G, Q) = E_{c∼P(c), z∼P_noise(z)}[log Q(c|G(z, c))] + H(c).

The InfoGAN objective is a regularized version of the original GAN objective above, where the regularizer is the variational lower bound of the mutual information:

min_{G,Q} max_D V_infogan(G, D, Q) = V(G, D) − λ L_I(G, Q).

Chen et al. (2016b) share the parameters between Q(.) and D(.), which helps reduce the computational cost. We do the same in all of our experiments.

As can be seen from the first term of the lower bound above, the mutual information regularizer conveniently turns out to be a recognition model. If the optimization procedure converges successfully, one can hope to have learned a latent code that ends up representing the most salient and structured semantic features present in the data. The noise parameters, z, end up providing the stochasticity to the input that results in the production of samples with diversity.
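For readers who prefer code, here is a short PyTorch sketch of the two objectives above for a single training step with a categorical code. The networks G, D and Q are placeholders (anything with the assumed signatures), not the authors' implementation.

import torch
import torch.nn.functional as F

# Sketch of the GAN and InfoGAN objectives. Assumptions: D(x) returns a
# real/fake logit of shape (N, 1); Q(x) returns categorical logits that
# approximate P(c|x); G(z, c) returns generated images.
def discriminator_loss(D, G, x_real, z, c):
    real = F.binary_cross_entropy_with_logits(D(x_real), torch.ones(len(x_real), 1))
    fake = F.binary_cross_entropy_with_logits(D(G(z, c).detach()), torch.zeros(len(z), 1))
    return real + fake                       # ascends V(G, D) in D

def generator_infogan_loss(D, Q, G, z, c_idx, lam=1.0):
    x_fake = G(z, c_idx)
    adv = F.binary_cross_entropy_with_logits(D(x_fake), torch.ones(len(z), 1))
    # Variational lower bound of I(c; G(z, c)): E[log Q(c|G(z, c))]. H(c) is
    # constant for a fixed prior and is dropped from the update.
    mi_lower_bound = -F.cross_entropy(Q(x_fake), c_idx)
    return adv - lam * mi_lower_bound        # minimizes V - lambda * L_I in G, Q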
3 UNSUPERVISED LEARNING WITH ADVERSARIAL TRAINING AND K-MEANS++ CLUSTERING

As mentioned in Section 1, we are interested in learning representations of images in a purely unsupervised manner. Both GAN and InfoGAN provide a way to train the discriminative network using the generated images as an adversary. InfoGAN is particularly interesting since it has the ability to directly predict the different categories that might be present in the training database. While the qualitative results presented in Chen et al. (2016b) show that the categories can be automatically identified on the MNIST dataset, unfortunately, the same result does not seem to extend to more complicated and realistic datasets (CIFAR-10, CIFAR-100 and STL-10). We modified the InfoGAN code released by the authors to enable support of the more realistic RGB data. We then trained the model on the above-mentioned datasets to test whether it could automatically identify the categorical clusters present in the respective datasets. We found that while the InfoGAN models we trained on these datasets were successful in generating images that looked different for different categorical codes, they were unable to identify the class-level grouping that is present in these datasets.

Instead, we adopt a hybrid strategy for unsupervised learning. We first use the generative network as an adversary to train the discriminative network until convergence. Upon convergence, we extract features from the penultimate layer of the D(.) network and run a more traditional clustering algorithm, i.e., k-means++. Surprisingly, this simple strategy turns out to be much more effective at grouping data from similar categories than the approach of directly predicting the categorical groups. Note that one can plug in more sophisticated unsupervised learning algorithms instead of k-means++. We use k-means++ to show that even a simple approach can produce reasonable results.

Another motivation for using the features from the penultimate layers is that it facilitates feature transferability to novel classes and tasks. It is common in supervised learning approaches to first train a deep network on ImageNet images using class-level supervision, then to perform net surgery to chop off the top-level weights, and to use this truncated network as a feature extractor for further fine-tuning on different datasets and tasks. Doing so does not restrict the model to being trained only on the ultimate task that it might be used for. One can train the network on a "pretext" task and transfer the learned weights to other novel tasks. This is especially crucial for unsupervised learning, since the pretext task that is used to train the models is almost always much different from the specific task that the model will ultimately be used for.

Figure 1: The InfoGAN architecture that was used in all our experiments. Notice that the input to G(.) is a combination of z and c. Also notice that most of the parameters are shared between the Q(.) network and the D(.) network, thus improving the computational efficiency. (The generator stacks transposed convolutions with batch norm and ReLU, ending in a tanh; the discriminator stacks stride-2 convolutions with batch norm and leaky ReLU, with heads for the real/fake prediction, Q(c|x), and a 512-dimensional fc feature φ(x).)"}, {"section_index": "4", "section_name": "3.1 NETWORK ARCHITECTURE", "section_text": "Generator: Note that the generator has been slightly modified to accept the structured latent code, c, in addition to the random noise, z. The first layer is a fully-connected (fc) layer, which is then reshaped into a 2-D grid of spatial resolution s/16 × s/16, where s is the size of the output image to be produced. Subsequent to this reshaping, the architecture has four layers of transposed_convolution (sometimes referred to as deconvolution) with a stride of 2, each of which upsamples the input features to twice the spatial resolution. These layers are sandwiched by batch_norm and ReLU layers. Finally, we use a tanh non-linearity to map the features into [-1, 1].

Discriminator: The discriminator is a standard CNN with a series of convolutional layers followed by non-linearities.
The architecture uses four convolutional layers sandwiched by batch_norm and leakyReLU layers. We don't use max_pooling to reduce the spatial resolution of the input. Instead, we convolve the feature maps with a stride of two, which results in the output of each convolution layer being half the spatial resolution of the input feature map. This base architecture is shared between D(.) and Q(.). On top of this shared network, we use an fc layer to extract the features, which are then used to predict the categorical distribution. Notice that most of the computational cost is shared between the D(.) and the Q(.) networks, thereby making the entire training process computationally efficient.

As mentioned previously, while InfoGAN has the ability to group data into multiple groups automatically, there is no constraint to enforce that the groups need to correspond to the various object-level categories that are present in the dataset. While this turned out to be true for the MNIST dataset (Chen et al. (2016b)), we believe that it was possible because the variations in the strokes that produce different digits correspond to the source of biggest variation in the dataset, which conveniently corresponds to the various digit categories, thereby enabling InfoGAN to act as a category recognition model. In more realistic datasets, the sources of biggest variation need not (and, usually, do not) correspond to variations in the object-level categories. Our experiments show this to be true. When we trained InfoGAN to automatically group the CIFAR-10 images into 10 categories, we found that while InfoGAN was able to group the images into different groups, the groups did not correspond to object category-level groupings. Figure 2 shows some example samples generated by the model. Each row corresponds to a different category and each column in the row corresponds to a different sample from that category (obtained by keeping c fixed and by varying z). We can see that while each row looks different from the others, the rows do not correspond to the CIFAR-10 categories.

Therefore, we employ a hybrid approach to unsupervised clustering. We first train the discriminative network using either the vanilla GAN objective or the InfoGAN objective, until convergence. Upon convergence, we extract features for each image in the training set from the top of the shared network, labeled as φ(x) in Figure 1, and do average_pooling across the spatial resolution for each feature channel. We then cluster these features using k-means++ into a discrete set of k categories. We set k to be the number of object classes that are present in the respective dataset. The cluster centers learned by k-means++ clustering act as the templates for the k categories that are present in the dataset.

During testing, we extract the feature representation of the test images by passing them through the discriminative network trained using the generator as an adversary, do average_pooling on φ(x), and compute the distance of the test feature vector to each of the centers learned by k-means++ clustering during the training phase. The test image is assigned an index corresponding to the index of the closest center. Our experiments show that clustering on φ(x) produces better results than directly using the recognition model of InfoGAN. Note that while we use the simple k-means++ algorithm for clustering, it could be replaced by more sophisticated unsupervised learning algorithms.
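The hybrid pipeline just described fits in a few lines. Below is a sketch using scikit-learn; the feature extractor phi is a placeholder for the trained discriminator's φ(x) activations followed by average pooling.

import numpy as np
from sklearn.cluster import KMeans

# Sketch of the hybrid strategy: k-means++ on training features, then
# nearest-center assignment at test time. phi is a stand-in for the
# discriminator's feature extractor.
def phi(feature_maps):
    return feature_maps.mean(axis=(1, 2))   # average pool over space

train_feats = phi(np.random.randn(500, 4, 4, 32))   # placeholder activations
test_feats = phi(np.random.randn(100, 4, 4, 32))

k = 10   # number of object classes in the dataset
kmeans = KMeans(n_clusters=k, init='k-means++', n_init=10).fit(train_feats)

# Test time: index of the closest cluster center (same as kmeans.predict).
dists = np.linalg.norm(test_feats[:, None, :] - kmeans.cluster_centers_[None], axis=-1)
test_assignments = dists.argmin(axis=1)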
We do not explore further down this route since the scope of this work is to study the strength of the features learned by adversarial training.

Figure 2: Samples generated from InfoGAN trained on the CIFAR-10 dataset when the system was encouraged to identify 10 categories. Each row corresponds to a different cluster identified by InfoGAN. Each column corresponds to a different sample from that cluster. We can see that while InfoGAN can identify clusters that are different from each other, they do not correspond to the CIFAR-10 categories. See Sec. 4.1 for quantitative results.

An advantage of the hybrid approach is that it now allows us to use a variety of different "pretext" objectives. In other words, one can decouple the training objective from the testing requirements. In fact, we experimented with encouraging InfoGAN to identify more groups in the training data than the number of object categories in the dataset. For example, we trained InfoGAN on the CIFAR-10 dataset by encouraging the system to identify [10, 20, 30, 35, 40, 50 and 75] groups. Of course, these groups do not correspond to category-level groupings. However, to our surprise, we found that when the features obtained from InfoGANs trained with a large number of categories were used for clustering, they performed better at object categorization than the features obtained from an InfoGAN trained with the same number of object categories as present in the dataset. Section 4 provides quantitative results on these experiments."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "We perform experiments on multiple datasets: CIFAR-10, CIFAR-100 and STL-10.¹ We use ground-truth labels only for evaluation purposes and for training the supervised learning baseline. The training procedure is entirely unsupervised. We report results using two standard metrics that are used for evaluating unsupervised learning algorithms: the Adjusted RAND Index (ARI) and the Normalized Mutual Information (NMI) score. We provide three baselines: (i) we report results using simple features such as pixel intensities, HOG and GIST, which we call low-level visual features; (ii) we report results on the features obtained using standard GAN training; (iii) as an upper bound, we report results using supervised learning, where we train the weights of a discriminator network with the same architecture using the category-level labels that are provided by the datasets.

¹We have released the code that was used in all our experiments at https://github.com/VittalP/UnsupGA

It is important to remember that we are interested in comparing the quality of the learned features that can be used for transfer to novel images, and not just the classification score on a pre-defined set of categories. The classification accuracy captures only how well a test image was correctly classified. If incorrectly classified, it does not quantify how bad the mistake was. ARI, on the other hand, is a better metric for evaluating the properties of the features because it measures not only how accurately pairs of objects were correctly grouped together, but also takes into account how many pairs of data points were incorrectly grouped.
Therefore, when comparing with the model that was trained using supervised learning, we ignore the top-level classification layer of that model, and quantify the quality of the representations, i.e., the features extracted from the penultimate layer, using ARI after clustering on them.

Figure 3: This figure shows all 64 filters from the first layer of the discriminative network trained on CIFAR-10. The visualization on the left corresponds to the filters learned using adversarial training. The visualization on the right corresponds to the filters learned for the same architecture using supervised learning. It is interesting to see that the filters on the left have more high-frequency components and the filters on the right are smoother.

Before we go into the quantitative results, we visualize the filters of the first layer of the discriminative network and compare them across the two different training procedures. Figure 3 shows the visualization. On the left are the filters from the network that was trained using adversarial training. On the right are the filters from a network with the same architecture but trained using class-level supervision. Both these networks were trained using the CIFAR-10 dataset. We can see that while some of the filters look similar to each other, many of them are quite different. It is clear that the filters on the right are smoother than the filters on the left. Recollect that the filters on the left are trained to fit both the real images and the generated images. When the generated images are not as high-quality as the real images, the filters that D(.) learns might not be as regularized as the ones learnt using only real data. We hypothesize that improving the quality of the generated images can help regularize the first-layer filters in D(.). We leave this route of exploration for future work.

Figure 4: CIFAR-10: (a) Plots the performance of the grouping algorithm when using the features learned from InfoGAN training when trained over multiple categories. Zero groups corresponds to vanilla GAN. -32 and -64 correspond to the output sizes of the generated images. -InfoGAN corresponds to the results obtained with direct prediction using the recognition model in InfoGAN. (b) Note that InfoGAN features perform better than vanilla GAN features. However, supervised learning outperforms unsupervised learning on this database.
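For reference, the ARI and NMI scores reported in the following subsections can be computed with scikit-learn. A minimal sketch, with placeholder features and labels:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Score a clustering of features against ground-truth labels with the two
# metrics used in this section; features and labels are placeholders.
features = np.random.randn(1000, 512)
labels = np.random.randint(0, 10, size=1000)

pred = KMeans(n_clusters=10, init='k-means++', n_init=10).fit_predict(features)
print('ARI:', adjusted_rand_score(labels, pred))
print('NMI:', normalized_mutual_info_score(labels, pred))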
"}, {"section_index": "6", "section_name": "4.1 CIFAR-10", "section_text": "The CIFAR-10 dataset consists of 50k training images and 10k testing images, of size 32 × 32, divided among 10 categories. We trained the model for two different image sizes, 32 × 32 and 64 × 64. We trained InfoGAN with different numbers of categories {10, 20, 30, 35, 40, 50, 75}. Figure 4a shows a plot of the performance measures versus the number of groups InfoGAN was trained to identify. We can see from the figure that as we increase the number of categories, the performance of the model goes up to a certain point and drops after that. This indicates that there exist databases for which grouping into more categories than present in the ground truth might help. We also plot the performance of the InfoGAN model when used directly as a prediction model. We can see from the plots that k-means++ clustering produces better results (ARI-32=0.097; NMI-32=0.18) than direct prediction (ARI-32-InfoGAN: 0.085; NMI-32-InfoGAN: 0.14). We label the direct prediction results with (-InfoGAN).

Figure 4b compares the performance when using different features. We can see that InfoGAN features trained with 50 clusters beat the features learned using vanilla GAN by a small margin. However, supervised training does much better (as one might have expected).

4.2 CIFAR-100

In these sets of experiments, we use the images from the CIFAR-100 database for training. This database also contains 50k training examples and 10k test images, divided among 100 fine-scale categories and 20 coarse-level categories. We test the performance on the coarse categories. As before, we experiment with InfoGAN training over multiple numbers of categories {10, 20, 35, 50}. While the trend is not as noticeable as in the case of CIFAR-10, the best performance is obtained when we use 50 categories. Also, as before, k-means++ clustering of the features produces better performance (ARI=0.04) than the recognition model of InfoGAN (ARI=0.036).

Figure 5: CIFAR-100: (a) The number of groups used to train InfoGAN has less of an effect on CIFAR-100 than it had on CIFAR-10. However, the performance of k-means++ clustering is still better than direct prediction using the recognition model of InfoGAN. Please see Fig. 4a for labeling conventions. (b) InfoGAN features and GAN features perform similarly on this dataset. However, supervised learning features are only slightly better than the unsupervised counterparts.

Figure 5b compares the performance when we use different features. Notice that the features obtained by adversarial training are as competitive as the features obtained using supervised training. We believe that this is because of two reasons: (i) CIFAR-100 coarse-level categories are much harder to distinguish than the CIFAR-10 categories, making it difficult for the supervised model to learn good features; (ii) the number of training examples per category in CIFAR-100 is smaller than in CIFAR-10, because we are training using the 20 coarse categories compared with the 10 of CIFAR-10. We label the direct prediction results with (-InfoGAN).

4.3 STL-10

Finally, we also perform experiments on the STL-10 dataset. This database consists of 5000 images for training with labels, 100000 training images without labels, and 8000 images for testing. The dataset consists of 10 categories, and all the images are of size 96 × 96. This dataset brings out the advantages of unsupervised learning algorithms. The database is more than two times bigger than the CIFAR-10 and CIFAR-100 datasets in terms of the number of images, and each image is 9 times the size of the CIFAR images. Figure 6b shows that unsupervised learning with adversarial training outperforms the same models trained using supervised learning. From Figure 6a, we also notice that the features learned using vanilla GAN do better than the features learned using InfoGAN. Increasing the complexity of the datasets makes it difficult for InfoGAN to group the images in the dataset.

Figure 6: STL-10: (a) InfoGAN's performance drops with an increase in the number of groups. (b) Vanilla GAN's features outperform InfoGAN-trained features. Also, notice that, with just 5000 labeled training images, supervised learning starts to reach its limits. However, our model makes use of the additional 100000 unlabeled images and is able to learn representations that surpass the performance of features learned using the supervised model."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper, we explore an unsupervised feature learning technique where the model is trained using adversarial training from a generative network. We use a generative model to generate images that act as an adversary to the discriminative network.
We explore the standard GAN architecture and the InfoGAN architecture for training the discriminative model. We also show that direct prediction using InfoGAN's recognition model does not always result in identifying object category-level information. Instead, we fuse the features learned by adversarial training with a traditional unsupervised learning approach, k-means clustering, and show that this combination produces better results than direct prediction. We also show that, in situations where there are limited amounts of labeled training data and large amounts of unlabeled data, adversarial training has the potential to outperform supervised learning."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016a.

Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.

Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 766-774, 2014.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Jianyu Wang, Zhishuai Zhang, Vittal Premachandran, and Alan Yuille.
Discovering internal representations from object-CNNs using population encoding. arXiv preprint arXiv:1511.06855, 2015.

Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794-2802, 2015.

Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1395-1403, 2015."}]
ryxB0Rtxx
[{"section_index": "0", "section_name": "[IDENTITY MATTERS IN DEEP LEARNING", "section_text": "IVEOLILZ TEALUL\n\nGoogle Brain\n\n1600 Amphitheatre Parkway\nMountain View, CA, 94043\n\nmamrtz.org\nAn emerging design principle in deep learning is that each layer of a deep artificial\nneural network should be able to easily express the identity transformation. This\nidea not only motivated various normalization techniques, such as batch normal-\nization, but was also key to the immense success of residual networks.\nIn this work, we put the principle of identity parameterization on a more solid\ntheoretical footing alongside further empirical progress. We first give a strikingly\nsimple proof that arbitrarily deep linear residual networks have no spurious local\noptima. The same result for feed-forward networks in their standard parameter-\nization is substantially more delicate. Second, we show that residual networks\nwith ReLu activations have universal finite-sample expressivity in the sense that\nthe network can represent any function of its sample provided that the model has\nmore parameters than the sample size.\nDirectly inspired by our theory, we experiment with a radically simple residual ar-\nchitecture consisting of only residual convolutional layers and ReLu activations,\nbut no batch normalization, dropout, or max pool. Our model improves signifi-\ncantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and\nImageNet classification benchmarks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "This shortcoming was observed and partially addressed by loffe & Szegedy (2015) through batch\nnormalization, i.e., layer-wise whitening of the input with a learned mean and covariance. But the\nidea remained somewhat implicit until residual networks (He et al. (2015); He et al. (2016)) explic-\nitly introduced a reparameterization of the convolutional layers such that when all trainable weights\nare 0, the layer represents the identity function. Formally, for an input x, each residual layer has the\nform x + h(a), rather than h(x). This simple reparameterization allows for much deeper architec-\ntures largely avoiding the problem of vanishing (or exploding) gradients. Residual networks, and\nsubsequent architectures that use the same parameterization, have since then consistently achieved\nstate-of-the-art results on various computer vision benchmarks such as CIFAR10 and ImageNet."}, {"section_index": "2", "section_name": "1.1 OUR CONTRIBUTIONS", "section_text": "In this work, we consider identity parameterizations from a theoretical perspective, while translatin:\nsome of our theoretical insight back into experiments. Loosely speaking, our first result underline\nhow identity parameterizations make optimization easier, while our second result shows the same 1\ntrue for representation.\nDepartment of Computer Sciene\nPrinceton University\n35 Olden Street, Princeton, 0854!"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Traditional convolutional neural networks for image classification, such as AlexNet (Krizhevsky\net al. (2012)), are parameterized in such a way that when all trainable weights are 0, a convolutional\nlayer represents the 0-mapping. Moreover, the weights are initialized symmetrically around 0. This\nstandard parameterization makes it non-trivial for a convolutional layer trained with stochastic gra-\ndient methods to preserve features that were already good. 
Put differently, such convolutional layers cannot easily converge to the identity transformation at training time.

Linear residual networks. Since general non-linear neural networks are beyond the reach of current theoretical methods in optimization, we consider the case of deep linear networks as a simplified model. A linear network represents an arbitrary linear map as a sequence of matrices A_ℓ ⋯ A_2 A_1. The objective function is E‖y − A_ℓ ⋯ A_1 x‖², where y = Rx for some unknown linear transformation R and x is drawn from a distribution. Such linear networks have been studied actively in recent years as a stepping stone toward the general non-linear case (see Section 1.2). Even though A_ℓ ⋯ A_1 is just a linear map, the optimization problem over the factored variables (A_ℓ, …, A_1) is non-convex.

In analogy with residual networks, we will instead parameterize the objective function as

min_{A_1, …, A_ℓ} E‖y − (I + A_ℓ) ⋯ (I + A_1) x‖².     (1.1)

To give some intuition, when the depth ℓ is large enough, we can hope that the target function R has a factored representation in which each matrix A_i has small norm. Any symmetric positive semidefinite matrix O can, for example, be written as a product O = O_ℓ ⋯ O_1, where each O_i = O^{1/ℓ} is very close to the identity for large ℓ, so that A_i = O_i − I has small spectral norm. We first prove that an analogous claim is true for all linear transformations R. Specifically, we prove that for every linear transformation R, there exists a global optimizer (A_1, …, A_ℓ) of (1.1) such that for large enough depth ℓ,

max_{1 ≤ i ≤ ℓ} ‖A_i‖ ≤ O(1/ℓ).

Here, ‖A‖ denotes the spectral norm of A. The constant factor depends on the conditioning of R. We give the formal statement in Theorem 2.1. The theorem has the interesting consequence that as the depth increases, smaller norm solutions exist and hence regularization may offset the increase in parameters.

Having established the existence of small norm solutions, our main result on linear residual networks shows that the objective function (1.1) is, in fact, easy to optimize when all matrices have sufficiently small norm. More formally, letting A = (A_1, …, A_ℓ) and f(A) denote the objective function in (1.1), we can show that the gradients of f vanish only when f(A) = 0, provided that max_i ‖A_i‖ ≤ O(1/ℓ). See Theorem 2.2. This result implies that linear residual networks have no critical points other than the global optimum. In contrast, for standard linear neural networks we only know, by work of Kawaguchi (2016), that these networks don't have local optima except the global optimum, but this doesn't rule out other critical points. In fact, setting A_i = 0 will always lead to a bad critical point in the standard parameterization.
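The existence of such small-norm factorizations is easy to check numerically. Here is a minimal NumPy sketch of the symmetric PSD intuition above, with O = R^{1/ℓ}; the specific dimensions and target matrix are illustrative.

import numpy as np

# Small-norm residual factorization of a symmetric PSD target R:
# with O = R^(1/ell), the factors A_i = O - I all have norm O(1/ell),
# yet (I + A_ell) ... (I + A_1) reproduces R exactly.
rng = np.random.default_rng(0)
d, ell = 10, 50
B = rng.standard_normal((d, d))
R = B @ B.T + np.eye(d)                  # symmetric positive definite target

w, U = np.linalg.eigh(R)                 # R = U diag(w) U^T with w > 0
O = U @ np.diag(w ** (1.0 / ell)) @ U.T  # O^ell = R
A = O - np.eye(d)                        # identical factor for every layer

prod = np.eye(d)
for _ in range(ell):
    prod = (np.eye(d) + A) @ prod        # (I + A)^ell

print(np.linalg.norm(A, 2))              # spectral norm shrinks like O(1/ell)
print(np.abs(prod - R).max())            # ~1e-10: the factors recover R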
Universal finite sample expressivity. Going back to non-linear residual networks with ReLU activations, we can ask: How expressive are deep neural networks that are solely based on residual layers with ReLU activations? To answer this question, we give a very simple construction showing that such residual networks have perfect finite-sample expressivity. In other words, a residual network with ReLU activations can easily express any function of a sample of size n, provided that it has sufficiently more than n parameters. Note that this requirement is easily met in practice. On CIFAR10 (n = 50000), for example, successful residual networks often have more than 10⁶ parameters. More formally, for a data set of size n with r classes, our construction requires O(n log n + r²) parameters. Theorem 3.2 gives the formal statement.

Each residual layer in our construction is of the form x + V ReLU(Ux), where U and V are linear transformations. These layers are significantly simpler than standard residual layers, which typically have two ReLU activations as well as two instances of batch normalization.

The power of all-convolutional residual networks. Directly inspired by the simplicity of our expressivity result, we experiment with a very similar architecture on the CIFAR10, CIFAR100, and ImageNet data sets. Our architecture is merely a chain of convolutional residual layers, each with a single ReLU activation, but without batch normalization, dropout, or max pooling as are common in standard architectures. The last layer is a fixed random projection that is not trained. In line with our theory, the convolutional weights are initialized near 0, using Gaussian noise mainly as a symmetry breaker. The only regularizer is standard weight decay (ℓ₂-regularization) and there is no need for dropout. Despite its simplicity, our architecture reaches 6.38% top-1 classification error on the CIFAR10 benchmark (with standard data augmentation). This is competitive with the best residual network reported in He et al. (2015), which achieved 6.43%. Moreover, it improves upon the performance of the previous best all-convolutional network, 7.25%, achieved by Springenberg et al. (2014). Unlike ours, this previous all-convolutional architecture additionally required dropout and a non-standard preprocessing (ZCA) of the entire data set. Our architecture also improves significantly upon Springenberg et al. (2014) on both CIFAR100 and ImageNet.

Since the advent of residual networks (He et al. (2015); He et al. (2016)), most state-of-the-art networks for image classification have adopted a residual parameterization of the convolutional layers. Further impressive improvements were reported by Huang et al. (2016) with a variant of residual networks, called dense nets. Rather than adding the original input to the output of a convolutional layer, these networks preserve the original features directly by concatenation. In doing so, dense nets are also able to easily encode an identity embedding in a higher-dimensional space. It would be interesting to see if our theoretical results also apply to this variant of residual networks.

There has been recent progress on understanding the optimization landscape of neural networks, though a comprehensive answer remains elusive. Experiments in Goodfellow et al. (2014) and Dauphin et al. (2014) suggest that the training objectives have a limited number of bad local minima with large function values. Work by Choromanska et al. (2015) draws an analogy between the optimization landscape of neural nets and that of the spin glass model in physics (Auffinger et al. (2013)). Soudry & Carmon (2016) showed that 2-layer neural networks have no bad differentiable local minima, but they didn't prove that a good differentiable local minimum does exist. Baldi & Hornik (1989) and Kawaguchi (2016) show that linear neural networks have no bad local minima. In contrast, we show that the optimization landscape of deep linear residual networks has no bad critical point, which is a stronger and more desirable property. Our proof is also notably simpler, illustrating the power of re-parametrization for optimization. Our results also indicate that deeper networks may have more desirable optimization landscapes compared with shallower ones.

Consider the problem of learning a linear transformation R: ℝ^d → ℝ^d from noisy measurements y = Rx + ξ, where ξ ∼ N(0, Id_d) is a d-dimensional spherical Gaussian vector.
Denoting by D the distribution of the input data x, let Σ = E_{x∼D}[xx^⊤] be its covariance matrix.

There are, of course, many ways to solve this classical problem, but our goal is to gain insights into the optimization landscape of neural nets, and in particular, residual networks. We therefore parameterize our learned model by a sequence of weight matrices A_1, …, A_ℓ ∈ ℝ^{d×d}:

h_0 = x,
h_j = h_{j−1} + A_j h_{j−1},   (j = 1, …, ℓ)
ŷ = h_ℓ = (Id + A_ℓ) ⋯ (Id + A_1) x.     (2.2)

It is easy to see that this model can express any linear transformation R. We will use A as a shorthand for all of the weight matrices, that is, the ℓ × d × d-dimensional tensor that contains A_1, …, A_ℓ as slices. Our objective function is the maximum likelihood estimator,

f(A, (x, y)) = ‖ŷ − y‖² = ‖(Id + A_ℓ) ⋯ (Id + A_1) x − Rx − ξ‖²,
f(A) := E[f(A, (x, y))].

Recall that ‖A_i‖ is the spectral norm of A_i. We define the norm |||·||| for the tensor A as the maximum of the spectral norms of its slices,

|||A||| = max_{1 ≤ i ≤ ℓ} ‖A_i‖.

The first theorem of this section states that the objective function f has an optimal solution with small |||·|||-norm, which is inversely proportional to the number of layers ℓ. Thus, when the architecture is deep, we can shoot for fairly small norm solutions. We define γ := max{|log σ_max(R)|, |log σ_min(R)|}. Here σ_min(·), σ_max(·) denote the least and largest singular values of R respectively.

Theorem 2.1. Suppose ℓ ≥ 3γ. Then, there exists a global optimum solution A* of the population risk (2.2) with norm

|||A*||| ≤ 2(√π + √(3γ))² / ℓ.

Here γ should be thought of as a constant, since if R is too large (or too small) we can scale the data properly so that σ_min(R) ≤ 1 ≤ σ_max(R). Concretely, if σ_max(R)/σ_min(R) = κ, then we can scale the outputs properly so that σ_min(R) = 1/√κ and σ_max(R) = √κ. In this case, we have γ = log √κ, which remains a small constant for fairly large condition number κ. We also point out that we made no attempt to optimize the constant factors here in the analysis. The proof of Theorem 2.1 is rather involved and is deferred to Section A.

Given the observation of Theorem 2.1, we restrict our attention to analyzing the landscape of f(·) in the set of A with |||·|||-norm less than τ,

B_τ = {A ∈ ℝ^{ℓ×d×d} : |||A||| ≤ τ}.

Here, using Theorem 2.1, the radius τ should be thought of as on the order of 1/ℓ. Our main theorem in this section claims that there is no bad critical point in the domain B_τ for any τ < 1. Recall that a critical point has vanishing gradient.

Theorem 2.2. For any τ < 1, we have that any critical point A of the objective function f(·) inside the domain B_τ must also be a global minimum.

Theorem 2.2 suggests that it is sufficient for the optimizer to converge to critical points of the population risk, since all the critical points are also global minima.

Moreover, in addition to Theorem 2.2, we also have that any A inside the domain B_τ satisfies

‖∇f(A)‖²_F ≥ 4ℓ(1 − τ)^{2(ℓ−1)} σ_min(Σ) (f(A) − C_opt).     (2.3)

Here C_opt is the global minimal value of f(·) and ‖∇f(A)‖_F denotes the Euclidean norm of the ℓ × d × d-dimensional tensor ∇f(A). Note that σ_min(Σ) denotes the minimum singular value of Σ.

Equation (2.3) says that the gradient has fairly large norm compared to the error, which guarantees convergence of gradient descent to a global minimum (Karimi et al. (2016)) if the iterates stay inside the domain B_τ, which is not guaranteed by Theorem 2.2 by itself.

Towards proving Theorem 2.2, we start off with a simple claim that simplifies the population risk. We also use ‖·‖_F to denote the Frobenius norm of a matrix.

Claim 2.3.
In the setting of this section, we have

f(A) = ‖((Id + A_ℓ) ⋯ (Id + A_1) − R) Σ^{1/2}‖²_F + C.

Here C is a constant that doesn't depend on A, and Σ^{1/2} denotes the square root of Σ, that is, the unique symmetric matrix B that satisfies B² = Σ.

Proof of Claim 2.3. Let tr(A) denote the trace of the matrix A. Let E = (Id + A_ℓ) ⋯ (Id + A_1) − R. Recalling the definition of f(A) and using equation (2.2), we have

f(A) = E[‖Ex − ξ‖²]   (by equation (2.2))
     = E[‖Ex‖² + ‖ξ‖² − 2⟨Ex, ξ⟩]
     = E[tr(Exx^⊤E^⊤)] + E[‖ξ‖²]   (since E[⟨Ex, ξ⟩] = E[⟨Ex, E[ξ | x]⟩] = 0)
     = tr(E E[xx^⊤] E^⊤) + C   (where C = E[‖ξ‖²])
     = tr(EΣE^⊤) + C = ‖EΣ^{1/2}‖²_F + C.   (since E[xx^⊤] = Σ)

Next we compute the gradients of the objective function f(·) by straightforward matrix calculus. We defer the full proof to Section A.

Lemma 2.4. The gradients of f(·) can be written as

∂f/∂A_i = 2 ((Id + A_ℓ) ⋯ (Id + A_{i+1}))^⊤ E Σ ((Id + A_{i−1}) ⋯ (Id + A_1))^⊤,     (2.5)

where E = (Id + A_ℓ) ⋯ (Id + A_1) − R.

Now we are ready to prove Theorem 2.2. The key observation is that each matrix A_i has small norm and cannot cancel the identity matrix. Therefore, the gradient in equation (2.5) is a product of non-zero matrices, except for the error matrix E. Thus, if the gradient vanishes, the only possibility is that the matrix E vanishes, which in turn implies A is an optimal solution.

Proof of Theorem 2.2. Using Lemma 2.4, we have

‖∂f/∂A_i‖_F = 2 ‖((Id + A_ℓ) ⋯ (Id + A_{i+1}))^⊤ E Σ ((Id + A_{i−1}) ⋯ (Id + A_1))^⊤‖_F   (by Lemma 2.4)
           ≥ 2 ∏_{j≠i} σ_min(Id + A_j) · σ_min(Σ)^{1/2} ‖EΣ^{1/2}‖_F
           ≥ 2 (1 − τ)^{ℓ−1} σ_min(Σ)^{1/2} ‖EΣ^{1/2}‖_F.   (since σ_min(Id + A_j) ≥ 1 − ‖A_j‖ ≥ 1 − τ)

Therefore,

‖∇f(A)‖²_F = Σ_{i=1}^{ℓ} ‖∂f/∂A_i‖²_F
           ≥ 4ℓ(1 − τ)^{2(ℓ−1)} σ_min(Σ) ‖EΣ^{1/2}‖²_F
           = 4ℓ(1 − τ)^{2(ℓ−1)} σ_min(Σ) (f(A) − C)   (by the definition of E and Claim 2.3)
           ≥ 4ℓ(1 − τ)^{2(ℓ−1)} σ_min(Σ) (f(A) − C_opt).   (since C_opt = min_A f(A) ≥ C by Claim 2.3)

This completes the proof of equation (2.3). Finally, if A is a critical point, namely, ∇f(A) = 0, then by equation (2.3) we have that f(A) = C_opt. That is, A is a global minimum.
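The claim and the lemma are straightforward to verify numerically. The following NumPy sketch compares the gradient formula of Lemma 2.4 against a finite-difference estimate, taking Σ = Id for simplicity; all dimensions are illustrative.

import numpy as np

# Numerical sanity check of Lemma 2.4 with Sigma = Id, in which case
# f(A) - C = ||E||_F^2 with E = (Id+A_ell)...(Id+A_1) - R.
rng = np.random.default_rng(1)
d, ell = 6, 4
R = rng.standard_normal((d, d))
A = [0.01 * rng.standard_normal((d, d)) for _ in range(ell)]

def product(A):
    P = np.eye(d)
    for Ai in A:
        P = (np.eye(d) + Ai) @ P
    return P

E = product(A) - R

def grad(A, i):
    left = np.eye(d)
    for Aj in A[i + 1:]:
        left = (np.eye(d) + Aj) @ left    # (Id+A_ell)...(Id+A_{i+1})
    right = np.eye(d)
    for Aj in A[:i]:
        right = (np.eye(d) + Aj) @ right  # (Id+A_{i-1})...(Id+A_1)
    return 2.0 * left.T @ E @ right.T     # Lemma 2.4 with Sigma = Id

i, eps = 1, 1e-6
D = rng.standard_normal((d, d))
A_pert = [a.copy() for a in A]
A_pert[i] = A_pert[i] + eps * D
fd = (np.sum((product(A_pert) - R) ** 2) - np.sum(E ** 2)) / eps
print(fd, np.sum(grad(A, i) * D))         # the two values agree closely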
In this section we characterize the finite-sample expressivity of residual networks. We consider residual layers with a single ReLU activation and no batch normalization. The basic residual building block is a function T_{U,V,s}(·): ℝ^k → ℝ^k that is parameterized by two weight matrices U ∈ ℝ^{k×k}, V ∈ ℝ^{k×k} and a bias vector s ∈ ℝ^k:

T_{U,V,s}(h) = V ReLU(Uh + s).     (3.1)

A residual network is composed of a sequence of such residual blocks. In comparison with the full pre-activation architecture in He et al. (2016), we remove two batch normalization layers and one ReLU layer in each building block.

We assume the data has r labels, encoded as r standard basis vectors in ℝ^r, denoted by e_1, …, e_r. We have n training examples (x^{(1)}, y^{(1)}), …, (x^{(n)}, y^{(n)}), where x^{(i)} ∈ ℝ^d denotes the i-th data point and y^{(i)} ∈ {e_1, …, e_r} denotes the i-th label. Without loss of generality we assume the data are normalized so that ‖x^{(i)}‖ = 1. We also make the mild assumption that no two data points are very close to each other.

Assumption 3.1. We assume that for every 1 ≤ i < j ≤ n, we have ‖x^{(i)} − x^{(j)}‖² ≥ ρ for some absolute constant ρ > 0.

Images, for example, can always be imperceptibly perturbed in pixel space so as to satisfy this assumption for a small but constant ρ.

Under this mild assumption, we prove that residual networks have the power to express any possible labeling of the data as long as the number of parameters is a logarithmic factor larger than n.

Theorem 3.2. Suppose the training examples satisfy Assumption 3.1. Then, there exists a residual network N (specified below) with O(n log n + r²) parameters that perfectly expresses the training data, i.e., for all i ∈ {1, …, n}, the network N maps x^{(i)} to y^{(i)}.

It is common in practice that n ≫ r², as is for example the case for the ImageNet data set, where n > 10⁶ and r = 1000.

We construct the following residual net using the building blocks of the form T_{U,V,s} as defined in equation (3.1). The network consists of ℓ + 1 hidden layers h_0, …, h_ℓ, and the output is denoted by ŷ ∈ ℝ^r. The first layer of weight matrices A_0 maps the d-dimensional input to a k-dimensional hidden variable h_0. Then we apply ℓ layers of building block T with weight matrices A_j, B_j ∈ ℝ^{k×k}. Finally, we apply another layer to map the hidden variable h_ℓ to the label ŷ in ℝ^r. Mathematically, we have

h_0 = A_0 x,
h_j = h_{j−1} + T_{A_j, B_j, b_j}(h_{j−1}),   ∀j ∈ {1, …, ℓ},
ŷ = h_ℓ + T_{A_{ℓ+1}, B_{ℓ+1}, b_{ℓ+1}}(h_ℓ).

We note that here A_{ℓ+1} ∈ ℝ^{k×k} and B_{ℓ+1} ∈ ℝ^{r×k}, so that the dimensions are compatible. We assume the number of labels r and the input dimension d are both smaller than n, which is safely true in practical applications.² The hyperparameter k will be chosen to be O(log n) and the number of layers is chosen to be ℓ = ⌈n/k⌉. Thus, the first layer has dk parameters, each of the middle ℓ building blocks contains 2k² parameters, and the final building block has kr + r² parameters. Hence, the total number of parameters is O(kd + ℓk² + rk + r²) = O(n log n + r²).

²In computer vision, typically r is less than 10³ and d is less than 10⁵ while n is larger than 10⁶.

Towards constructing a network N of the form above that fits the data, we first take a random matrix A_0 ∈ ℝ^{k×d} that maps all the data points x^{(i)} to vectors h_0^{(i)} := A_0 x^{(i)}. Here we use h_j^{(i)} to denote the j-th layer of hidden variable of the i-th example. By the Johnson–Lindenstrauss Theorem (Johnson & Lindenstrauss (1984), or see Wikipedia (2016)), with good probability, the resulting vectors h_0^{(i)} continue to satisfy Assumption 3.1 (with slightly different scaling and larger constant ρ), that is, any two vectors h_0^{(i)} and h_0^{(j)} are not very correlated. Then we construct ℓ middle layers that map the h_0^{(i)} to vectors h_ℓ^{(i)} for every i ∈ {1, …, n}.
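For concreteness, here is a small NumPy sketch of the building block (3.1) and the forward pass of the network just described; all sizes and weight values are illustrative, and the final residual connection is omitted here since h_ℓ lives in ℝ^k while ŷ lives in ℝ^r.

import numpy as np

# Sketch of the construction: T_{U,V,s}(h) = V ReLU(U h + s), stacked as
# h_j = h_{j-1} + T(h_{j-1}). Sizes and weights are illustrative.
def T(h, U, V, s):
    return V @ np.maximum(U @ h + s, 0.0)

rng = np.random.default_rng(2)
d, k, r, ell = 64, 8, 4, 5
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                        # data normalized to unit norm

A0 = rng.standard_normal((k, d)) / np.sqrt(k)  # random JL-style projection
h = A0 @ x                                     # h_0
for _ in range(ell):
    U = 0.1 * rng.standard_normal((k, k))
    V = 0.1 * rng.standard_normal((k, k))
    s = np.zeros(k)
    h = h + T(h, U, V, s)                      # h_j = h_{j-1} + T(h_{j-1})

# Final block maps h_ell in R^k to a score vector in R^r.
y_hat = T(h, rng.standard_normal((k, k)), rng.standard_normal((r, k)), np.zeros(k))
print(y_hat.shape)                             # (r,)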
These vectors h_ℓ^{(i)} will be clustered into r groups according to the labels, though they are in ℝ^k instead of in ℝ^r as desired. Concretely, we design the cluster centers by picking r random unit vectors u_1, …, u_r in ℝ^k. We view them as the surrogate label vectors in dimension k (note that k is potentially much smaller than r). In high dimensions (technically, if k > 4 log r), random unit vectors u_1, …, u_r are pairwise uncorrelated with inner products less than 0.5. We associate the i-th example with the target surrogate label vector v^{(i)} defined as follows: if y^{(i)} = e_j, then v^{(i)} = u_j. The middle layers are constructed so that

∀i ∈ {1, …, n}:  h_ℓ^{(i)} = v^{(i)}.

We briefly sketch the proof of the Lemma to provide intuition, and defer the full proof to Section B. The operation that each residual block applies to the hidden variable can be abstractly written as

h ↦ h + T_{U,V,s}(h),     (3.5)

where h corresponds to the hidden variable before the block and the right-hand side to that after. We claim that for an (almost) arbitrary sequence of vectors h^{(1)}, …, h^{(n)}, there exist U, V, s such that operation (3.5) transforms k of the vectors h^{(i)} to an arbitrary set of other k vectors that we can freely choose, and maintains the values of the remaining n − k vectors. Concretely, for any subset S of size k and any desired vectors v^{(i)} (i ∈ S), there exist U, V, s such that

v^{(i)} = h^{(i)} + T_{U,V,s}(h^{(i)})   ∀i ∈ S,
h^{(i)} = h^{(i)} + T_{U,V,s}(h^{(i)})   ∀i ∉ S.

This claim is formalized in Lemma B.1. We can use it repeatedly to construct ℓ layers of building blocks, each of which transforms a subset of k vectors in {h^{(1)}, …, h^{(n)}} to the corresponding vectors in {v^{(1)}, …, v^{(n)}} and maintains the values of the others. Recall that we have ℓ = ⌈n/k⌉ layers; therefore, after ℓ layers, all the vectors h^{(i)} are transformed to the v^{(i)}, which completes the proof sketch.

Inspired by our theory, we experimented with all-convolutional residual networks on standard image classification benchmarks."}, {"section_index": "4", "section_name": "4.1 CIFAR10 AND CIFAR100", "section_text": "Our architectures for CIFAR10 and CIFAR100 are identical except for the final dimension corresponding to the number of classes, 10 and 100, respectively. In Table 1, we outline our architecture. Each residual block has the form x + C_2(ReLU(C_1 x)), where C_1, C_2 are convolutions of the specified dimension (kernel width, kernel height, number of input channels, number of output channels). The second convolution in each block always has stride 1, while the first may have stride 2 where indicated. In cases where the transformation is not dimensionality-preserving, the original input x is adjusted using average pooling and padding, as is standard in residual layers.
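As a concrete rendering of this block, here is a short PyTorch sketch (the paper's experiments used TensorFlow); the near-identity initialization follows the standard-deviation choice described next, and everything else is an illustrative assumption.

import torch
from torch import nn

# PyTorch sketch of one block x + C2(ReLU(C1 x)): two convolutions, one
# ReLU, no batch normalization. Illustrative re-implementation only.
class PlainResidualBlock(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.c1 = nn.Conv2d(channels, channels, k, padding=k // 2, bias=False)
        self.c2 = nn.Conv2d(channels, channels, k, padding=k // 2, bias=False)
        for conv in (self.c1, self.c2):
            # near-identity start: std 1/(k^2 c), as described below
            nn.init.normal_(conv.weight, std=1.0 / (k ** 2 * channels))

    def forward(self, x):
        return x + self.c2(torch.relu(self.c1(x)))

x = torch.randn(2, 16, 32, 32)
print(PlainResidualBlock(16)(x).shape)   # torch.Size([2, 16, 32, 32])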
We trained our models with the TensorFlow framework, using a momentum optimizer with momentum 0.9 and batch size 128. All convolutional weights are trained with weight decay 0.0001. The initial learning rate is 0.05, which drops by a factor of 10 at 30000 and at 50000 steps. The model reaches peak performance at around 50k steps, which takes about 24h on a single NVIDIA Tesla K40 GPU. Our code can be easily derived from an open source implementation³ by removing batch normalization and adjusting the residual components and model architecture. An important departure from that code is that we initialize a residual convolutional layer of kernel size k × k and c output channels using a random normal initializer of standard deviation σ = 1/(k²c), rather than the 1/(k√c) used for standard convolutional layers. This substantially smaller weight initialization helped training, while not affecting representation.

³https://github.com/tensorflow/models/tree/master/resnet

A notable difference from standard models is that the last layer is not trained, but is simply a fixed random projection. On the one hand, this slightly improved test error (perhaps due to a regularizing effect). On the other hand, it means that the only trainable weights in our model are those of the convolutions, making our architecture "all-convolutional".

Table 1: Architecture for CIFAR10/100 (55 convolutions, 13.5M parameters)

variable dimensions      initial stride   description
3 x 3 x 3 x 16           1                1 standard conv
3 x 3 x 16 x 64          1                9 residual blocks
3 x 3 x 64 x 128         2                9 residual blocks
3 x 3 x 128 x 256        2                9 residual blocks
-                        -                8 x 8 global average pool
256 x num_classes        -                random projection (not trained)

Figure 1: Convergence plots of the best model for CIFAR10 (left) and CIFAR100 (right), showing train and test precision. One step is a gradient update with batch size 128.

An interesting aspect of our model is that, despite its massive size of 13.59 million trainable parameters, the model does not seem to overfit too quickly, even though the data set size is 50000. In contrast, we found it difficult to train a model with batch normalization of this size without significant overfitting on CIFAR10.

Table 2 summarizes the top-1 classification error of our models compared with a non-exhaustive list of previous works, restricted to the best previous all-convolutional result by Springenberg et al. (2014), the first residual results of He et al. (2015), and state-of-the-art results on CIFAR by Huang et al. (2016). All results are with standard data augmentation.

Table 2: Comparison of top-1 classification error on different benchmarks

Method      CIFAR10   CIFAR100   ImageNet   remarks
All-CNN     7.25      32.39      41.2       all-convolutional, dropout, extra data processing
Ours        6.38      24.64      35.29      all-convolutional
ResNet      6.43      25.16      19.38
DenseNet    3.74      19.25      N/A

"}, {"section_index": "5", "section_name": "4.2 IMAGENET", "section_text": "The ImageNet ILSVRC 2012 data set has 1,281,167 data points with 1000 classes. Each image is resized to 224 × 224 pixels with 3 channels. We experimented with an all-convolutional variant of the 34-layer network in He et al. (2015). The original model achieved 25.03% classification error. Our derived model has 35.7M trainable parameters. We trained the model with a momentum optimizer (with momentum 0.9) and a learning rate schedule that decays by a factor of 0.94 every two epochs, starting from the initial learning rate 0.1. Training was distributed across 6 machines updating asynchronously. Each machine was equipped with 8 GPUs (NVIDIA Tesla K40) and used batch size 256 split across the 8 GPUs, so that each GPU updated with batches of size 32.
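The two learning-rate schedules just described are simple to state in code; the helper names below are hypothetical, not from the released code.

# Hypothetical helpers restating the schedules above. CIFAR: 0.05, dropped
# 10x at 30000 and 50000 steps; ImageNet: 0.1 decayed by 0.94 every 2 epochs.
def cifar_lr(step, base=0.05):
    return base / (10 ** ((step >= 30000) + (step >= 50000)))

def imagenet_lr(epoch, base=0.1, decay=0.94):
    return base * decay ** (epoch // 2)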
Table 1: Architecture for CIFAR10/100 (55 convolutions, 13.5M parameters)

variable dimensions | initial stride | description
3 x 3 x 3 x 16 | 1 | 1 standard conv
3 x 3 x 16 x 64 | 1 | 9 residual blocks
3 x 3 x 64 x 128 | 2 | 9 residual blocks
3 x 3 x 128 x 256 | 2 | 9 residual blocks
- | - | 8 x 8 global average pool
256 x num_classes | - | random projection (not trained)

Figure 1: Convergence plots of the best model for CIFAR10 (left) and CIFAR100 (right), plotting train and test precision against steps (×1000). One step is a gradient update with batch size 128.

An interesting aspect of our model is that, despite its massive size of 13.59 million trainable parameters, it does not seem to overfit too quickly, even though the data set size is 50000. In contrast, we found it difficult to train a model with batch normalization of this size without significant overfitting on CIFAR10.

Table 2 summarizes the top-1 classification error of our models compared with a non-exhaustive list of previous works, restricted to the best previous all-convolutional result by Springenberg et al. (2014), the first residual results of He et al. (2015), and state-of-the-art results on CIFAR by Huang et al. (2016). All results are with standard data augmentation.

Table 2: Comparison of top-1 classification error on different benchmarks

Method | CIFAR10 | CIFAR100 | ImageNet | remarks
All-CNN | 7.25 | 32.39 | 41.2 | all-convolutional, dropout, extra data processing
Ours | 6.38 | 24.64 | 35.29 | all-convolutional
ResNet | 6.43 | 25.16 | 19.38 |
DenseNet | 3.74 | 19.25 | N/A |

"}, {"section_index": "5", "section_name": "4.2 IMAGENET", "section_text": "The ImageNet ILSVRC 2012 data set has 1,281,167 data points with 1000 classes. Each image is resized to 224 × 224 pixels with 3 channels. We experimented with an all-convolutional variant of the 34-layer network in He et al. (2015). The original model achieved 25.03% classification error. Our derived model has 35.7M trainable parameters. We trained the model with a momentum optimizer (with momentum 0.9) and a learning rate schedule that decays by a factor of 0.94 every two epochs, starting from the initial learning rate 0.1. Training was distributed across 6 machines updating asynchronously. Each machine was equipped with 8 GPUs (NVIDIA Tesla K40) and used batch size 256, split across the 8 GPUs so that each GPU updated with batches of size 32.

In contrast to the situation with CIFAR10 and CIFAR100, on ImageNet our all-convolutional model performed significantly worse than its original counterpart. Specifically, we experienced a significant amount of underfitting, suggesting that a larger model would likely perform better.

Despite this issue, our model still reached 35.29% top-1 classification error on the test set (50000 data points), and 14.17% top-5 test error after 700,000 steps (about one week of training). While no longer state-of-the-art, this performance is significantly better than the 40.7% reported by Krizhevsky et al. (2012), as well as the best all-convolutional architecture by Springenberg et al. (2014). We believe it is quite likely that a better learning rate schedule and hyperparameter settings of our model could substantially improve on the preliminary performance reported here.

Our theory underlines the importance of identity parameterizations when training deep artificial neural networks. An outstanding open problem is to extend our optimization result to the non-linear case where each residual block has a single ReLU activation, as in our expressivity result. We conjecture that a result analogous to Theorem 2.2 is true for the general non-linear case. Unlike with the standard parameterization, we see no fundamental obstacle for such a result.

We hope our theory and experiments together help simplify the state of deep learning by aiming to explain its success with a few fundamental principles, rather than a multitude of tricks that need to be delicately combined. We believe that much of the advances in image recognition can be achieved with residual convolutional layers and ReLU activations alone. This could lead to extremely simple (albeit deep) architectures that match the state-of-the-art on all image classification benchmarks."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Antonio Auffinger, Gérard Ben Arous, and Jiří Černý. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66(2):165-201, 2013.

P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53-58, January 1989. ISSN 0893-6080. doi: 10.1016/0893-6080(89)90014-2. URL http://dx.doi.org/10.1016/0893-6080(89)90014-2.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pp. 630-645, 2016. doi: 10.1007/978-3-319-46493-0_38. URL http://dx.doi.org/10.1007/978-3-319-46493-0_38.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. CoRR, abs/1608.06993, 2016. URL http://arxiv.org/abs/1608.06993.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.

K. Kawaguchi. Deep learning without poor local minima. ArXiv e-prints, May 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. ArXiv e-prints, May 2016.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. ArXiv e-prints, December 2014.
"}, {"section_index": "7", "section_name": "A MISSING PROOFS IN SECTION 2", "section_text": "In this section, we give the complete proofs of Theorem 2.1 and Lemma 2.4, which are omitted in Section 2."}, {"section_index": "8", "section_name": "A.1 PROOF OF THEOREM 2.1", "section_text": "It turns out the proof is significantly easier if R is assumed to be a symmetric positive semidefinite (PSD) matrix, or if we allow the variables to be complex matrices. Here we first give a proof sketch for the first special case. The reader can skip it and jump to the full proof below. We will also prove a stronger result, namely |||A*||| ≤ 3γ/ℓ, for this special case.

Since R is PSD, it admits an eigen-decomposition R = U diag(z_j) U^⊤ = UZU^⊤, where U is orthonormal and the z_j are nonnegative. Define A_i* = U diag(z_j^{1/ℓ}) U^⊤ - Id for every i ∈ [ℓ]. Then

(Id + A_ℓ*) ··· (Id + A_1*) = (U diag(z_j^{1/ℓ}) U^⊤)^ℓ = U diag(z_j^{1/ℓ})^ℓ U^⊤   (since U^⊤U = Id)
= UZU^⊤ = R.   (A.1)

We see that the network defined by A* reconstructs the transformation R, and therefore it is a global minimum of the population risk (formally, see Claim 2.3 below). Next, we verify that each A_i* has small spectral norm:

‖A_i*‖ = ‖Id - U diag(z_j^{1/ℓ}) U^⊤‖ = ‖U(Id - diag(z_j^{1/ℓ})) U^⊤‖ = ‖Id - diag(z_j^{1/ℓ})‖ = max_j |z_j^{1/ℓ} - 1|.

Writing z_j^{1/ℓ} = e^{(log z_j)/ℓ}, we have

|z_j^{1/ℓ} - 1| = |e^{(log z_j)/ℓ} - 1| ≤ 3|log z_j|/ℓ ≤ 3γ/ℓ   (since |e^x - 1| ≤ 3|x| for all |x| ≤ 1).

Then, using equation (A.1) and the inequality above, we have |||A*||| ≤ max_i ‖A_i*‖ ≤ 3γ/ℓ, which completes the proof for the special case.
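The special case can be checked numerically. A small numpy sketch of the construction above; the dimensions, ℓ, and the eigenvalue range (γ ≈ 0.3) are illustrative choices of ours:

import numpy as np

rng = np.random.default_rng(0)
d, ell = 6, 10

# Random symmetric PSD R = U diag(z) U^T with eigenvalues close to 1.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
z = np.exp(rng.uniform(-0.3, 0.3, size=d))
R = Q @ np.diag(z) @ Q.T

# A_i^* = U diag(z^{1/ell}) U^T - Id for every layer i.
A_star = Q @ np.diag(z ** (1.0 / ell)) @ Q.T - np.eye(d)

# The product of the ell identical residual factors reconstructs R ...
P = np.eye(d)
for _ in range(ell):
    P = (np.eye(d) + A_star) @ P
assert np.allclose(P, R)

# ... and each factor is small: ||A_i^*|| = max_j |z_j^{1/ell} - 1| <= 3*gamma/ell.
gamma = np.max(np.abs(np.log(z)))
assert np.linalg.norm(A_star, 2) <= 3 * gamma / ell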
Next we give the formal full proof of Theorem 2.1.

Proof of Theorem 2.1. We assume the dimension d is an even number; the odd case has a very similar proof and is left to the reader. Let R = UKV^⊤ be its singular value decomposition, where U, V are two orthonormal matrices and K is a diagonal matrix. Since U is a normal matrix (that is, U satisfies UU^⊤ = U^⊤U), by Claim C.1 we have that U can be block-diagonalized by an orthonormal matrix S into U = SDS^{-1}, where D = diag(D_1, ..., D_{d/2}) is a real block diagonal matrix with each block D_i of size 2 × 2.

Since U is orthonormal, U has all its eigenvalues lying on the unit circle (in the complex plane). Since D and U are unitarily similar to each other, D also has eigenvalues lying on the unit circle, and so does each block D_i. This means that each D_i is a 2 × 2 rotation matrix; write D_i = T(θ_i), where T(θ) denotes rotation by angle θ ∈ [-π, π]. Let W = diag(T(θ_1/q), ..., T(θ_{d/2}/q)), so that W^q = D and hence (SWS^{-1})^q = SDS^{-1} = U. Setting B_i = SWS^{-1} - Id for i ∈ [q], we obtain U = (Id + B_q) ··· (Id + B_1), and

‖B_i‖ = ‖Id - SWS^{-1}‖ = ‖S(Id - W)S^{-1}‖
= ‖Id - W‖   (since S is unitary)
= max_{i∈[d/2]} ‖T(0) - T(θ_i/q)‖   (since W = diag(T(θ_i/q)) is block diagonal)
= max_i 2|sin(θ_i/(2q))| ≤ π/q.

Similarly, we can choose B'_1, ..., B'_q with ‖B'_i‖ ≤ π/q so that V^⊤ = (Id + B'_q) ··· (Id + B'_1). For the diagonal matrix K with entries k_i, let C_i = K^{1/p} - Id for i ∈ [p], so that K = (Id + C_p) ··· (Id + C_1) and

‖K^{1/p} - Id‖ = max_i |e^{(log k_i)/p} - 1| ≤ 3 max_i |log k_i|/p = 3γ/p   (since |e^x - 1| ≤ 3|x| for |x| ≤ 1).

Choosing p and q with p + 2q = ℓ,* we conclude that

R = UKV^⊤ = (Id + A_ℓ) ··· (Id + A_1),

where each A_i is one of the B_i, C_i, B'_i constructed above. This completes the proof.

*Here, for notational convenience, p and q are not chosen to be integers. But rounding them to the closest integer changes the final bound on the norm by a small constant factor.

"}, {"section_index": "9", "section_name": "A.2 PROOF OF LEMMA 2.4", "section_text": "We compute the partial gradients by definition. Let Δ_j ∈ R^{d×d} be an infinitesimal change to A_j. Using Claim 2.3, consider the Taylor expansion of f(A_1, ..., A_j + Δ_j, ..., A_ℓ):

f(A_1, ..., A_j + Δ_j, ..., A_ℓ)
= ‖((Id + A_ℓ) ··· (Id + A_j + Δ_j) ··· (Id + A_1) - R)Σ^{1/2}‖_F²
= ‖((Id + A_ℓ) ··· (Id + A_1) - R)Σ^{1/2} + (Id + A_ℓ) ··· (Id + A_{j+1}) Δ_j (Id + A_{j-1}) ··· (Id + A_1)Σ^{1/2}‖_F²
= ‖((Id + A_ℓ) ··· (Id + A_1) - R)Σ^{1/2}‖_F²
+ 2⟨((Id + A_ℓ) ··· (Id + A_1) - R)Σ^{1/2}, (Id + A_ℓ) ··· (Id + A_{j+1}) Δ_j (Id + A_{j-1}) ··· (Id + A_1)Σ^{1/2}⟩ + O(‖Δ_j‖²)
= f(A) + 2⟨(Id + A_{j+1}^⊤) ··· (Id + A_ℓ^⊤) E Σ (Id + A_1^⊤) ··· (Id + A_{j-1}^⊤), Δ_j⟩ + O(‖Δ_j‖²),

where E = (Id + A_ℓ) ··· (Id + A_1) - R. By definition, this means that

∂f/∂A_j = 2 (Id + A_{j+1}^⊤) ··· (Id + A_ℓ^⊤) E Σ (Id + A_1^⊤) ··· (Id + A_{j-1}^⊤).
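As a sanity check on the gradient formula, here is a finite-difference verification in numpy (a sketch under our notation; the test dimensions are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
d, ell, j = 4, 3, 1
A = [rng.normal(scale=0.1, size=(d, d)) for _ in range(ell)]
R = rng.normal(size=(d, d))
S = rng.normal(size=(d, d))
Sigma = S @ S.T                               # PSD covariance

def prod(mats):
    # (Id + A_last) ... (Id + A_first), applied right to left
    P = np.eye(d)
    for M in mats:
        P = (np.eye(d) + M) @ P
    return P

def f(A):
    E = prod(A) - R
    return np.trace(E.T @ E @ Sigma)          # = ||E Sigma^{1/2}||_F^2

# Gradient from Lemma 2.4: 2 L^T E Sigma K^T with
# L = (Id+A_ell)...(Id+A_{j+1}) and K = (Id+A_{j-1})...(Id+A_1).
E = prod(A) - R
L, K = prod(A[j + 1:]), prod(A[:j])
G = 2 * L.T @ E @ Sigma @ K.T

eps, num = 1e-6, np.zeros((d, d))
for a in range(d):
    for b in range(d):
        Ap = [M.copy() for M in A]
        Am = [M.copy() for M in A]
        Ap[j][a, b] += eps
        Am[j][a, b] -= eps
        num[a, b] = (f(Ap) - f(Am)) / (2 * eps)   # central difference
assert np.allclose(G, num, atol=1e-5)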
In this section, we provide the full proof of Theorem 3.2. We start with the following Lemma, which constructs a building block T that transforms k vectors of an arbitrary sequence of n vectors to an arbitrary set of target vectors, and maintains the values of the others. For better abstraction we use α^{(i)}, β^{(i)} to denote the sequences of vectors.

Lemma B.1. Let S ⊂ [n] be of size k. Suppose α^{(1)}, ..., α^{(n)} is a sequence of n vectors satisfying a) for every 1 ≤ i ≤ n, we have 1 - ρ' ≤ ‖α^{(i)}‖² ≤ 1 + ρ', and b) if i ≠ j and S contains at least one of i, j, then ‖α^{(i)} - α^{(j)}‖ ≥ 3ρ'. Let β^{(1)}, ..., β^{(n)} be an arbitrary sequence of vectors. Then there exist U, V ∈ R^{k×k}, s such that for every i ∈ S we have T_{U,V,s}(α^{(i)}) = β^{(i)} - α^{(i)}, and moreover, for every i ∈ [n]\S we have T_{U,V,s}(α^{(i)}) = 0.

We can see that the conclusion implies

β^{(i)} = α^{(i)} + T_{U,V,s}(α^{(i)})   ∀i ∈ S
α^{(i)} = α^{(i)} + T_{U,V,s}(α^{(i)})   ∀i ∉ S

Proof of Lemma B.1. Without loss of generality, suppose S = {1, ..., k}. We construct U, V, s as follows. Let the i-th row of U be α^{(i)} for i ∈ [k], and let s = -(1 - 2ρ')·1, where 1 denotes the all-1's vector. Let the i-th column of V be (β^{(i)} - α^{(i)}) / (‖α^{(i)}‖² - (1 - 2ρ')) for i ∈ [k].

Next we verify the correctness of the construction. We first consider 1 ≤ i ≤ k. We have that Uα^{(i)} is a vector with i-th coordinate equal to ‖α^{(i)}‖² ≥ 1 - ρ'. The j-th coordinate (j ≠ i) of Uα^{(i)} is equal to ⟨α^{(j)}, α^{(i)}⟩, which can be upper bounded using the assumptions of the Lemma by

⟨α^{(j)}, α^{(i)}⟩ = ½(‖α^{(i)}‖² + ‖α^{(j)}‖² - ‖α^{(i)} - α^{(j)}‖²) ≤ 1 + ρ' - 3ρ' ≤ 1 - 2ρ'.   (B.1)

Hence ReLU(Uα^{(i)} + s) = (‖α^{(i)}‖² - (1 - 2ρ'))e_i, and by the choice of V we get V ReLU(Uα^{(i)} + s) = β^{(i)} - α^{(i)}.

Finally, consider n ≥ i > k. Then, similarly to the computation in equation (B.1), Uα^{(i)} is a vector with all coordinates less than 1 - 2ρ'. Therefore Uα^{(i)} + s is a vector with negative entries. Hence ReLU(Uα^{(i)} + s) = 0, which implies V ReLU(Uα^{(i)} + s) = 0.

Now we are ready to state the formal version of Lemma 3.3.

Lemma B.2. Suppose a sequence of n vectors z^{(1)}, ..., z^{(n)} satisfies a relaxed version of Assumption 3.1: a) for every i, 1 - ρ' ≤ ‖z^{(i)}‖² ≤ 1 + ρ'; b) for every i ≠ j, we have ‖z^{(i)} - z^{(j)}‖ ≥ ρ'. Let v^{(1)}, ..., v^{(n)} be defined as above. Then there exist weight matrices (A_1, B_1), ..., (A_ℓ, B_ℓ) such that, given h_0^{(i)} = z^{(i)}, we have

∀i ∈ {1, ..., n},  h_ℓ^{(i)} = v^{(i)}.

We will use Lemma B.1 repeatedly to construct building blocks T_{A_j,B_j,b_j}(·), and thus prove Lemma B.2. Each building block T_{A_j,B_j,b_j}(·) takes a subset of k vectors among {z^{(1)}, ..., z^{(n)}} and converts them to the corresponding v^{(i)}'s, while maintaining all other vectors fixed. Since there are in total ℓ = ⌈n/k⌉ layers, we finally map all the z^{(i)}'s to the target vectors v^{(i)}'s.

Proof of Lemma B.2. We construct the layers inductively. We will construct the layers such that the hidden variable at layer j satisfies h_j^{(i)} = v^{(i)} for every 1 ≤ i ≤ jk, and h_j^{(i)} = z^{(i)} for every n ≥ i > jk. Assume that we have constructed the first j layers, and next we use Lemma B.1 to construct the (j+1)-th layer. We argue that the choice α^{(1)} = v^{(1)}, ..., α^{(jk)} = v^{(jk)}, α^{(jk+1)} = z^{(jk+1)}, ..., α^{(n)} = z^{(n)} and S = {jk + 1, ..., (j+1)k} satisfies the assumptions of Lemma B.1. Indeed, because the q_s's are chosen uniformly at random, we have w.h.p. for every s and i, ⟨q_s, z^{(i)}⟩ ≤ 1 - ρ'. Thus, since v^{(i)} ∈ {q_1, ..., q_r}, we have that v^{(i)} also doesn't correlate with any of the z^{(i)}'s. Then we apply Lemma B.1 and conclude that there exist A_{j+1} = U, B_{j+1} = V, b_{j+1} = s such that T_{A_{j+1},B_{j+1},b_{j+1}}(v^{(i)}) = 0 for i ≤ jk, T_{A_{j+1},B_{j+1},b_{j+1}}(z^{(i)}) = v^{(i)} - z^{(i)} for jk < i ≤ (j+1)k, and T_{A_{j+1},B_{j+1},b_{j+1}}(z^{(i)}) = 0 for n ≥ i > (j+1)k. These imply that

h_{j+1}^{(i)} = h_j^{(i)} + T_{A_{j+1},B_{j+1},b_{j+1}}(v^{(i)}) = v^{(i)}   ∀1 ≤ i ≤ jk
h_{j+1}^{(i)} = h_j^{(i)} + T_{A_{j+1},B_{j+1},b_{j+1}}(z^{(i)}) = v^{(i)}   ∀jk + 1 ≤ i ≤ (j+1)k
h_{j+1}^{(i)} = h_j^{(i)} + T_{A_{j+1},B_{j+1},b_{j+1}}(z^{(i)}) = z^{(i)}   ∀(j+1)k < i ≤ n

Therefore we have constructed the (j+1)-th layer so that it meets the inductive hypothesis for layer j+1. By induction we obtain all the layers, and the last layer satisfies h_ℓ^{(i)} = v^{(i)} for every example i.

Proof of Theorem 3.2. We formalize the intuition discussed below Theorem 3.2. First, take k = c(log n)/ρ² for a sufficiently large absolute constant c (for example, c = 10 works). By the Johnson-Lindenstrauss Theorem (Johnson & Lindenstrauss (1984), or see Wikipedia (2016)), we have that when A_0 is a random matrix with standard normal entries, with high probability all the pairwise distances between the set of vectors {0, x^{(1)}, ..., x^{(n)}} are preserved up to a 1 ± ρ/3 factor. That is, we have that for every i, 1 - ρ/3 ≤ ‖A_0 x^{(i)}‖² ≤ 1 + ρ/3, and for every i ≠ j, ‖A_0 x^{(i)} - A_0 x^{(j)}‖ ≥ ρ(1 - ρ/3) ≥ 2ρ/3. Let z^{(i)} = A_0 x^{(i)} and ρ' = ρ/3. Then the z^{(i)}'s satisfy the condition of Lemma B.2. We pick r random unit vectors q_1, ..., q_r in R^k. Let v^{(1)}, ..., v^{(n)} be defined as in equation (3.2). Then, by Lemma B.2, we can construct matrices (A_1, B_1), ..., (A_ℓ, B_ℓ) such that h_ℓ^{(i)} = v^{(i)} = q_{y_i} for every i, which completes the proof.

In this section, we state two folklore linear algebra statements. The following Claim should be known, but we could not find it in the literature. We provide the proof here for completeness.

Claim C.1. Let U ∈ R^{d×d} be a normal matrix. Then there exists an orthonormal matrix S ∈ R^{d×d} such that U = SDS^{-1}, where D is a real block diagonal matrix that consists of blocks with size at most 2 × 2. Moreover, if d is even, then D consists of blocks with size exactly 2 × 2.
Proof. Since U is a normal matrix, it is unitarily diagonalizable (see Weisstein (2016) for background). Therefore, there exist a unitary matrix V ∈ C^{d×d} and a diagonal matrix Λ ∈ C^{d×d} such that U has the eigen-decomposition U = VΛV*. Since U itself is a real matrix, the eigenvalues (the diagonal entries of Λ) come in conjugate pairs, and so do the eigenvectors (which are the columns of V). That is, we can group the columns of V into pairs (v_1, v̄_1), ..., (v_s, v̄_s), v_{s+1}, ..., v_t, and let the corresponding eigenvalues be λ_1, λ̄_1, ..., λ_s, λ̄_s, λ_{s+1}, ..., λ_t. Here λ_{s+1}, ..., λ_t ∈ R. Then we get that U = Σ_{i=1}^{s} 2ℜ(v_i λ_i v_i*) + Σ_{i=s+1}^{t} λ_i v_i v_i^⊤. Let Q_i = ℜ(v_i λ_i v_i*); then Q_i is a real matrix of rank 2. Let S_i ∈ R^{d×2} be an orthonormal basis of the column span of Q_i; then Q_i can be written as Q_i = S_i D_i S_i^⊤, where D_i is a 2 × 2 matrix. Finally, letting S = [S_1, ..., S_s, v_{s+1}, ..., v_t] and D = diag(D_1, ..., D_s, λ_{s+1}, ..., λ_t), we complete the proof.

The following Claim is used in the proof of Theorem 2.2. We provide a proof here for completeness.

Claim C.2. For any matrices A, B of compatible dimensions,

‖AB‖_F ≥ σ_min(A)‖B‖_F.

Proof. Since σ_min(A)² is the smallest eigenvalue of A^⊤A, we have that

B^⊤A^⊤AB ⪰ B^⊤ · σ_min(A)² Id · B.

Therefore, it follows that

‖AB‖_F² = tr(B^⊤A^⊤AB) ≥ σ_min(A)² tr(B^⊤B) = σ_min(A)² ‖B‖_F².

Taking the square root of both sides completes the proof."}]
B1ckMDqlg
[{"section_index": "0", "section_name": "OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER", "section_text": "Noam Shazeer^1, Azalia Mirhoseini*^1, Krzysztof Maziarz*^2, Andy Davis^1, Quoc Le^1, Geoffrey Hinton^1 and Jeff Dean^1

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

^1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com
^2Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.

Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model; the diagram shows a gating network G(x) routing between experts (Expert 1, ..., Expert n) inside stacked MoE layers. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:

Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.

Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.

Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.

Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per-example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.

Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.

In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets."}, {"section_index": "2", "section_name": "1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER", "section_text": "Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.

While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E, Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.

The works above concern top-level mixtures of experts: the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.

Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.

The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E_1, ..., E_n, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:

y = Σ_{i=1}^{n} G(x)_i E_i(x)

We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.
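A minimal Python sketch of this output equation and the sparsity saving; the toy linear "experts" and names are ours, for illustration only:

import numpy as np

def moe_forward(x, gates, experts):
    # y = sum_i G(x)_i * E_i(x); experts with zero gate are never evaluated.
    y = 0.0
    for i, g in enumerate(gates):
        if g != 0.0:
            y = y + g * experts[i](x)
    return y

# toy usage: 4 linear experts, only two active for this example
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.normal(size=(3, 3)): W @ x for _ in range(4)]
gates = np.array([0.7, 0.0, 0.3, 0.0])
y = moe_forward(rng.normal(size=3), gates, experts)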
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers."}, {"section_index": "3", "section_name": "2.1 GATING NETWORK", "section_text": "Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the Softmax function:

G_σ(x) = Softmax(x · W_g)

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to -∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise:

G(x) = Softmax(KeepTopK(H(x), k))

H(x)_i = (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i)

KeepTopK(v, k)_i = v_i if v_i is in the top k elements of v; -∞ otherwise.

Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015), who use boolean gates and a REINFORCE-style approach to train the gating network.
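A short numpy sketch of noisy top-k gating as defined above (a sketch, not the paper's released implementation; function names are ours):

import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def noisy_top_k_gating(x, W_g, W_noise, k, rng=np.random):
    h = x @ W_g + rng.standard_normal(W_g.shape[1]) * softplus(x @ W_noise)
    topk = np.argsort(h)[-k:]
    masked = np.full_like(h, -np.inf)
    masked[topk] = h[topk]           # KeepTopK: everything else -> -inf
    return softmax(masked)           # -inf entries become exactly 0

rng = np.random.default_rng(0)
d, n = 8, 6
g = noisy_top_k_gating(rng.normal(size=d), rng.normal(size=(d, n)),
                       rng.normal(size=(d, n)), k=2, rng=rng)
assert (g > 0).sum() == 2 and abs(g.sum() - 1.0) < 1e-12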
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size.

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.

Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size."}, {"section_index": "4", "section_name": "3.2 NETWORK BANDWIDTH", "section_text": "Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers."}, {"section_index": "5", "section_name": "4 BALANCING EXPERT UTILIZATION", "section_text": "We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.^1

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance:

Importance(X) = Σ_{x∈X} G(x)

L_importance(X) = w_importance · CV(Importance(X))²

^1Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.
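A minimal numpy sketch of the importance loss just defined (the helper names are ours; w_importance = 0.1 is the value used later in Appendix C):

import numpy as np

def cv_squared(v, eps=1e-10):
    # squared coefficient of variation: Var(v) / Mean(v)^2
    return np.var(v) / (np.mean(v) ** 2 + eps)

def importance_loss(gate_matrix, w_importance=0.1):
    # gate_matrix: [batch, n_experts] array of gate values G(x) over a batch X
    importance = gate_matrix.sum(axis=0)     # batchwise sum per expert
    return w_importance * cv_squared(importance)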
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.

Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.
Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.

Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

Model | Test Perplexity 10 epochs | Test Perplexity 100 epochs | #Parameters excluding embedding and softmax layers | ops/timestep | Training Time 10 epochs | TFLOPS/GPU
Best Published Results | 34.7 | 30.6 | 151 million | 151 million | 59 hours, 32 k40s | 1.09
Low-Budget MoE Model | 34.1 | | 4303 million | 8.9 million | 15 hours, 16 k40s | 0.74
Medium-Budget MoE Model | 31.3 | | 4313 million | 33.8 million | 17 hours, 32 k40s | 1.22
High-Budget MoE Model | 28.0 | | 4371 million | 142.7 million | 47 hours, 32 k40s | 1.56

Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.
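The efficiency metric itself is simple; a one-function Python sketch (the example numbers below are hypothetical, for illustration only):

def tflops_per_gpu(flops_per_batch, step_time_seconds, num_gpus):
    # Observed efficiency as defined above: floating point operations for one
    # training batch, divided by observed step time and number of GPUs.
    return flops_per_batch / (step_time_seconds * num_gpus) / 1e12

# hypothetical numbers: 4e12 FLOPs per batch, 0.25 s steps, 16 GPUs
assert tflops_per_gpu(4.0e12, 0.25, 16) == 1.0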
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.

We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.

Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep); the two curves plot test perplexity against model parameters (excluding embedding and softmax), after training on 10B words and after training on 100B words.

Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.

Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU."}, {"section_index": "6", "section_name": "5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR)", "section_text": "Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2, respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts, each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.
Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 2.69 | 40.35 | 85M | 8.7B | 3 days/64 k40s
MoE with 2048 Experts (longer training) | 2.63 | 40.56 | 85M | 8.7B | 6 days/64 k40s
GNMT (Wu et al., 2016) | 2.79 | 39.22 | 214M | 278M | 6 days/96 k80s
GNMT+RL (Wu et al., 2016) | 2.96 | 39.92 | 214M | 278M | 6 days/96 k80s
PBMT (Durrani et al., 2014) | | 37.0 | | |
LSTM (6-layer) (Luong et al., 2015b) | | 31.5 | | |
LSTM (6-layer+PosUnk) (Luong et al., 2015b) | | 33.1 | | |
DeepAtt (Zhou et al., 2016) | | 37.7 | | |
DeepAtt+PosUnk (Zhou et al., 2016) | | 39.2 | | |

Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results).

Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 4.64 | 26.03 | 85M | 8.7B | 1 day/64 k40s
GNMT (Wu et al., 2016) | 5.25 | 24.91 | 214M | 278M | 1 day/96 k80s
GNMT+RL (Wu et al., 2016) | 8.08 | 24.66 | 214M | 278M | 1 day/96 k80s
PBMT (Durrani et al., 2014) | | 20.7 | | |
DeepAtt (Zhou et al., 2016) | | 20.6 | | |

Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).

Model | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 2.60 | 37.27 | 2.69 | 36.57 | 85M | 8.7B | 1 day/64 k40s
GNMT (Wu et al., 2016) | 2.78 | 35.80 | 2.87 | 35.56 | 214M | 278M | 6 days/96 k80s

Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.^2 On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time."}, {"section_index": "7", "section_name": "5.4 MULTILINGUAL MACHINE TRANSLATION", "section_text": "Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.

Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs.
The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

Table 5: Multilingual Machine Translation (bold values represent best results).

| GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi
Parameters | 278M / model | 278M | 8.7B |
ops/timestep | 212M | 212M | 102M |
training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s |
Perplexity (dev) | | 4.14 | 3.35 | -19%
French → English Test BLEU | 36.47 | 34.40 | 37.46 | +3.06
German → English Test BLEU | 31.77 | 31.17 | 34.80 | +3.63
Japanese → English Test BLEU | 23.41 | 21.62 | 25.91 | +4.29
Korean → English Test BLEU | 25.42 | 22.87 | 28.71 | +5.84
Portuguese → English Test BLEU | 44.40 | 42.53 | 46.13 | +3.60
Spanish → English Test BLEU | 38.00 | 36.04 | 39.39 | +3.35
English → French Test BLEU | 35.37 | 34.00 | 36.59 | +2.59
English → German Test BLEU | 26.43 | 23.15 | 24.53 | +1.38
English → Japanese Test BLEU | 23.66 | 21.10 | 22.78 | +1.68
English → Korean Test BLEU | 19.75 | 18.41 | 16.62 | -1.79
English → Portuguese Test BLEU | 38.40 | 37.35 | 37.90 | +0.55
English → Spanish Test BLEU | 34.50 | 34.25 | 36.21 | +1.96

^2Reported perplexities are relative to the tokenization used by both our models and GNMT.

This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.
"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002.

Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.

Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.

David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 1994.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.

Hasim Sak, Andrew W. Senior, and Francoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.

Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. JMLR, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.

Volker Tresp. Mixtures of Gaussian Processes. In NIPS, 2001.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS, 2009.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.
"}, {"section_index": "10", "section_name": "A LOAD-BALANCING LOSS", "section_text": "As discussed in section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the k-th-greatest element of H(x) excluding itself. The probability works out to be:

P(x, i) = Pr((x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i) > kth_excluding(H(x), k, i))

where kth_excluding(v, k, i) means the k-th highest component of v, excluding component i. Simplifying, we get:

P(x, i) = Φ( ((x · W_g)_i - kth_excluding(H(x), k, i)) / Softplus((x · W_noise)_i) )

where Φ is the CDF of the standard normal distribution.

Load(X)_i = Σ_{x∈X} P(x, i)

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load:

L_load(X) = w_load · CV(Load(X))²

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_noise to all zeros, which yields no signal and some noise.

Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load-balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

Table 6: Experiments with different combinations of losses.

w_importance | w_load | Test Perplexity | CV(Importance(X)) | CV(Load(X)) | max(Load(X))/mean(Load(X))
0.0 | 0.0 | 39.8 | 3.04 | 3.01 | 17.80
0.2 | 0.0 | 35.6 | 0.06 | 0.17 | 1.47
0.0 | 0.2 | 35.7 | 0.22 | 0.04 | 1.15
0.1 | 0.1 | 35.6 | 0.06 | 0.05 | 1.14
0.01 | 0.01 | 35.7 | 0.48 | 0.11 | 1.37
1.0 | 1.0 | 35.7 | 0.03 | 0.02 | 1.07

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.
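A direct numpy/scipy sketch of this smooth load estimator (a naive looped version of the definitions above, using scipy for Φ; array names are ours):

import numpy as np
from scipy.stats import norm

def load_estimate(clean_logits, noisy_logits, noise_std, k):
    # clean_logits, noisy_logits: [batch, n] arrays of x.W_g and H(x);
    # noise_std: [batch, n] array of Softplus(x.W_noise). Requires k <= n - 1.
    b, n = noisy_logits.shape
    load = np.zeros(n)
    for x in range(b):
        for i in range(n):
            others = np.delete(noisy_logits[x], i)
            threshold = np.sort(others)[-k]      # kth highest, excluding i
            load[i] += norm.cdf(
                (clean_logits[x, i] - threshold) / noise_std[x, i])
    return load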
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.^3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:

y_H = Σ_{i=1}^{a} Σ_{j=1}^{b} G_primary(x)_i · G_i(x)_j · E_{i,j}(x)

The utilization metrics change accordingly:

Importance_H(X)_{i,j} = Σ_{x∈X} G_primary(x)_i · G_i(x)_j

Load_H(X)_{i,j} = (Load_primary(X)_i · Load_i(X^{(i)})_j) / |X^{(i)}|

Load_primary and Load_i denote the Load functions for the primary gating network and the i-th secondary gating network respectively. X^{(i)} denotes the subset of X for which G_primary(x)_i > 0.

^3It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^{(i)})_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
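A toy Python sketch of the hierarchical output equation, again skipping inactive branches (names and structure are ours, for illustration):

def hierarchical_moe(x, g_primary, secondary_gates, experts):
    # y = sum_i sum_j G_primary(x)_i * G_i(x)_j * E_{i,j}(x);
    # whole expert groups with zero primary gate are never evaluated.
    y = 0.0
    for i, gp in enumerate(g_primary):
        if gp == 0.0:
            continue
        for j, gs in enumerate(secondary_gates[i]):
            if gs != 0.0:
                y = y + gp * gs * experts[i][j](x)
    return y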
C  1 BILLION WORD LANGUAGE MODELING BENCHMARK - EXPERIMENTAL DETAILS

C.1  8-MILLION-OPERATIONS-PER-TIMESTEP MODELS

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).

MoE Layer Architecture: Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 * 1024] + [1024 * 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.

MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.

To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.
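For concreteness, the learning rate schedule from the Training paragraph (linear warmup for the first 1000 steps, then decay proportional to the inverse square root of the step number) can be sketched as follows; the base rate value is an illustrative assumption, since the text does not state it.

```python
def learning_rate(step, warmup_steps=1000, base_lr=1e-3):
    # linear warmup, then inverse-square-root decay; continuous at the joint
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps / step) ** 0.5

# e.g. learning_rate(500) == 5e-4 and learning_rate(4000) == 5e-4
```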
Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.

Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).

Model              | Test Perplexity (10 epochs) | Test Perplexity (final) | ops/timestep (millions) | #Params excl. embed. & softmax (millions) | Total #Params (billions) | DropProb | TFLOPS per GPU (observed)
Kneser-Ney 5-gram* |      | 67.6 | 0.00001 |        | 1.8  |      |
LSTM-512-512*      |      | 54.1 | 2.4     | 2.4    | 0.8  | 0.1  |
LSTM-1024-512*     |      | 48.2 | 4.7     | 4.7    | 0.8  | 0.1  |
LSTM-2048-512*     | 45.0 | 43.7 | 9.4     | 9.4    | 0.8  | 0.1  | 0.61
LSTM-2048-512      | 44.7 |      | 9.4     | 9.4    | 0.8  | 0.1  | 1.21
4xLSTM-512         | 46.0 |      | 8.4     | 8.4    | 0.8  | 0.1  | 1.07
MoE-1-Wide         | 46.1 |      | 8.4     | 8.4    | 0.8  | 0.1  | 1.29
MoE-1-Deep         | 45.7 |      | 8.4     | 8.4    | 0.8  | 0.1  | 1.29
MoE-4              | 45.0 |      | 8.4     | 8.4    | 0.8  | 0.1  | 0.52
MoE-32             | 39.7 |      | 8.4     | 37.8   | 0.9  | 0.1  | 0.87
MoE-256            | 35.7 |      | 8.6     | 272.9  | 1.1  | 0.1  | 0.81
MoE-256-h          | 36.0 |      | 8.4     | 272.9  | 1.1  | 0.1  | 0.89
MoE-1024-h         | 34.6 |      | 8.5     | 1079.0 | 1.9  | 0.2  | 0.90
MoE-4096-h         | 34.1 |      | 8.9     | 4303.4 | 5.1  | 0.2  | 0.74
2xLSTM-8192-1024*  | 34.7 | 30.6 | 151.0   | 151.0  | 1.8  | 0.25 | 1.09
MoE-34M            | 31.3 |      | 33.8    | 4313.9 | 6.0  | 0.3  | 1.22
MoE-143M           | 28.0 |      | 142.7   | 4371.1 | 6.0  | 0.4  | 1.56

"}, {"section_index": "11", "section_name": "C.2 MORE EXPENSIVE MODELS", "section_text": "We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs.

The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%."}, {"section_index": "12", "section_name": "D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS", "section_text": "Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.

Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.

We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:

The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β₁ = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
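A sketch of the factored second-moment approximation follows. It is our own illustration and assumes an exponential moving average of squared gradients with the usual Adam default β₂ = 0.999, which the text does not state: only a row vector and a column vector are stored per weight matrix, and the full estimator is reconstructed as their outer product divided by the mean of either one.

```python
import numpy as np

def factored_second_moment(V_row, V_col, grad, beta2=0.999):
    # update row-wise and column-wise averages of the squared gradients
    V_row = beta2 * V_row + (1 - beta2) * (grad ** 2).mean(axis=1)
    V_col = beta2 * V_col + (1 - beta2) * (grad ** 2).mean(axis=0)
    # full matrix of estimators: outer product of the two vectors,
    # divided by the mean of either one
    V_hat = np.outer(V_row, V_col) / V_row.mean()
    return V_row, V_col, V_hat

rows, cols = 1024, 2048
V_row, V_col = np.zeros(rows), np.zeros(cols)
grad = np.random.randn(rows, cols)
V_row, V_col, V_hat = factored_second_moment(V_row, V_col, grad)
# auxiliary storage: rows + cols values instead of rows * cols
```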
Table 8: Model comparison on 100 Billion Word Google News Dataset.

Model             | Test Perplexity (.1 epochs) | Test Perplexity (1 epoch) | ops/timestep (millions) | #Params excl. embed. & softmax (millions) | Total #Params (billions) | TFLOPS per GPU (observed)
Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 |          | 76.0  |
4xLSTM-512        | 54.5 | 47.0 | 8.4     | 8.4      | 0.1   | 1.23
MoE-32            | 48.5 | 40.4 | 8.4     | 37.8     | 0.1   | 0.83
MoE-256-h         | 42.8 | 35.3 | 8.4     | 272.9    | 0.4   | 1.11
MoE-1024-h        | 40.3 | 32.7 | 8.5     | 1079.0   | 1.2   | 1.14
MoE-4096-h        | 38.9 | 30.9 | 8.6     | 4303.4   | 4.4   | 1.07
MoE-16384-h       | 38.2 | 29.7 | 8.8     | 17201.0  | 17.3  | 0.96
MoE-65536-h       | 38.2 | 28.9 | 9.2     | 68791.0  | 68.9  | 0.72
MoE-131072-h      | 39.8 | 29.2 | 9.7     | 137577.6 | 137.7 | 0.30

Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).⁴"}, {"section_index": "13", "section_name": "E MACHINE TRANSLATION - EXPERIMENTAL DETAILS", "section_text": "Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention.⁵ All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system.

We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).

We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 * 2048] + [2048 * 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.

Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.

To ensure balanced expert utilization we set w_importance = 0.01 and w_load = 0.01, as described in Section 4 and Appendix A.

⁴ While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.

⁵ For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016) - see Appendix G.
Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.

Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).

Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.

Figure 4: Perplexity on WMT'14 En→Fr (left) and Google Production En→Fr (right) datasets as a function of number of words processed, for #Experts = 0, 32, 512 and 2048. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.

Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En→Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)_i, and show the words surrounding the corresponding positions in the input sentences.

Expert 381                           | Expert 752                           | Expert 2004
... with researchers , ...           | ... plays a core ...                 | ... provides quick a ...
... to innovation .                  | ... plays a critical ...             | ... of volatile organi ...
... tics researchers .               | ... provides a legislative ...       | ... with rapidly growing ...
... the generation of ...            | ... play a leading ...               | ... under static conditions ...
... technology innovations is ...    | ... assume a leadership ...          | ... to swift ly ...
... technological innovations , ...  | ... plays a central ...              | ... to dras tically ...
... support innovation throughout ...| ... taken a leading ...              | ... the rapid and ...
... role innovation will ...         | ... established a reconciliation ... | ... the fast est ...
... research scienti st ...          | ... played a vital ...               | ... the Quick Method ...
... promoting innovation where ...   | ... have a central ...               | ... rec urrent ) ...

Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size.
To accommodate this, we used a different gating function which we describe below. Recall that we define the softmax gating function to be:

G_σ(x) = Softmax(x · W_g)

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply G_σ(x) component-wise with a sparse mask M(G_σ(x)) and normalize the output. The mask itself is a function of G_σ(x) and specifies which experts are assigned to each input example:

G(x)_i = ( G_σ(x)_i · M(G_σ(x))_i ) / Σ_j G_σ(x)_j · M(G_σ(x))_j

Top-K Mask: To implement top-k gating in this formulation, we would let M(v) = TopK(v, k), where:

TopK(v, k)_i = 1 if v_i is in the top k elements of v, and 0 otherwise.

Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, M_batchwise(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = (k · |X|) / n, so that each example is sent to an average of k experts:

M_batchwise(X, m)_{j,i} = 1 if X_{j,i} is in the top m values for expert i, and 0 otherwise.

As our experiments suggest and also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as M_batchwise) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:

M_threshold(x, T)_i = 1 if x_i > T_i, and 0 otherwise.

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical:

L_batchwise(X, T, m) = Σ_{j=1}^{|X|} Σ_{i=1}^{n} ( M_threshold(x, T)_i − M_batchwise(X, m)_{j,i} ) ( X_{j,i} − T_i )
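The masks and the threshold loss above are easy to state in code. The sketch below is our own NumPy illustration; the batch size, expert count and k are arbitrary choices.

```python
import numpy as np

def batchwise_mask(X, m):
    # keep, for each expert i (column), the top m gate scores across the batch
    M = np.zeros_like(X)
    top_rows = np.argsort(X, axis=0)[-m:, :]
    M[top_rows, np.arange(X.shape[1])] = 1.0
    return M

def threshold_mask(X, T):
    # M_threshold(x, T)_i = 1 if x_i > T_i; used at inference time
    return (X > T).astype(X.dtype)

def threshold_loss(X, T, m):
    # minimized when the threshold mask reproduces the batchwise mask
    return np.sum((threshold_mask(X, T) - batchwise_mask(X, m)) * (X - T))

n, k, batch = 8, 2, 16
m = k * batch // n                 # m = k|X| / n, here 4 examples per expert
X = np.random.randn(batch, n)      # gate scores for a training batch
T = np.zeros(n)                    # learned per-expert thresholds
print(threshold_loss(X, T, m))
```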
The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(x_i, y_j) which takes a "source vector" x_i and a "target vector" y_j, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size n. It can be expressed as:

A_GNMT(x_i, y_j) = Σ_{d=1}^{n} V_d tanh((x_i U)_d + (y_j W)_d)

Where U and W are trainable weight matrices and V is a trainable weight vector.

For performance reasons, in our models, we used a slightly different attention function:

A(x_i, y_j) = Σ_{d=1}^{n} V_d tanh((x_i U)_d) tanh((y_j W)_d)

With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.
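The computational difference is easy to see in code: the multiplicative form factorizes over i and j, so all source/target pairs reduce to two matrix multiplications, whereas the additive GNMT form couples i and j inside the tanh. A minimal NumPy sketch of both, with illustrative dimensions of our own choosing:

```python
import numpy as np

def multiplicative_attention(X, Y, U, W, V):
    # A(x_i, y_j) = sum_d V_d tanh((x_i U)_d) tanh((y_j W)_d)
    # two matmuls give the scores for all pairs at once
    return (np.tanh(X @ U) * V) @ np.tanh(Y @ W).T       # shape (S_x, S_y)

def gnmt_attention(X, Y, U, W, V):
    # A_GNMT(x_i, y_j) = sum_d V_d tanh((x_i U)_d + (y_j W)_d)
    # the tanh couples i and j, so an (S_x, S_y, n) tensor is materialized
    return np.tanh((X @ U)[:, None, :] + (Y @ W)[None, :, :]) @ V

S_x, S_y, d, n = 5, 7, 16, 8
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(S_x, d)), rng.normal(size=(S_y, d))
U, W, V = rng.normal(size=(d, n)), rng.normal(size=(d, n)), rng.normal(size=n)
A_ours = multiplicative_attention(X, Y, U, W, V)
A_gnmt = gnmt_attention(X, Y, U, W, V)
```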
"}]

B1hdzd5lg
[{"section_index": "0", "section_name": "WORDS OR CHARACTERS? FINE-GRAINED GATING\nFOR READING COMPREHENSION", "section_text": "zhiliny, wcohen, rsalakhu}\u00e9cs .cmu.edu\nPrevious work combines word-level and character-level representations using con-\ncatenation or scalar weighting, which is suboptimal for high-level tasks like read-\ning comprehension. We present a fine-grained gating mechanism to dynamically\ncombine word-level and character-level representations based on properties of the\nwords. We also extend the idea of fine-grained gating to modeling the interaction\nbetween questions and paragraphs for reading comprehension. Experiments show\nthat our approach can improve the performance on reading comprehension tasks.\nachieving new state-of-the-art results on the Children\u2019s Book Test and Who Did\nWhat datasets. To demonstrate the generality of our gating mechanism, we alsc\nshow improved results on a social media tag prediction task!!]"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Finding semantically meaningful representations of the words (also called tokens) in a document i\nnecessary for strong performance in Natural Language Processing tasks. In neural networks, token\nare mainly represented in two ways, either using word-level representations or character-level repre\nsentations. Word-level representations are obtained from a lookup table, where each unique token i\nrepresented as a vector. Character-level representations are usually obtained by applying recurren\nneural networks (RNNs) or convolutional neural networks (CNNs) on the character sequence of th\ntoken, and their hidden states are combined to form the representation. Word-level representation\nare good at memorizing the semantics of the tokens while character-level representations are mor\nsuitable for modeling sub-word morphologies (Ling et al.| {2015} [Yang et al.|{2016a). For example\nconsidering \u201c\u2018cat\u201d and \u201ccats\u201d, word-level representations can only learn the similarities between th\ntwo tokens by training on a large amount of training data, while character-level representations, b\ndesign, can easily capture the similarities. Character-level representations are also used to alleviat\nthe difficulties of modeling out-of-vocabulary (OOV) tokens (Luong & Manning} /|2016).\nHybrid word-character models have been proposed to leverage the advantages of both word-level\nand character-level represent The most commonly used method is to concatenate these two\nrepresentations . However, concatenating word-level and character-level repre-\nsentations is technically problematic. For frequent tokens, the word-level representations are usually\naccurately estimated during the training process, and thus introducing character-level representa-\ntions can potentially bias the entire representations. For infrequent tokens, the estimation of word-\nlevel representations have high variance, which will have negative effects when combined with the\ncharacter-level representations. To address this issue, recently Miyamoto & Cho} (2016) introduced\na scalar gate conditioned on the word-level representations to control the ratio of the two repre-\nsentations. However, for the task of reading comprehension, preliminary experiments showed that\nthis method was not able to improve the performance over concatenation. There are two possible\nreasons. 
First, word-level representations might not contain sufficient information to support the decisions of selecting between the two representations. Second, using a scalar gate means applying the same ratio for each of the dimensions, which can be suboptimal.

In this work, we present a fine-grained gating mechanism to combine the word-level and character-level representations. We compute a vector gate as a linear projection of the token features followed"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "by a sigmoid activation. We then multiplicatively apply the gate to the character-level and word-level representations. Each dimension of the gate controls how much information is flowed from the word-level and character-level representations respectively. We use named entity tags, part-of-speech tags, document frequencies, and word-level representations as the features for token properties which determine the gate. More generally, our fine-grained gating mechanism can be used to model multiple levels of structure in language, including words, characters, phrases, sentences and paragraphs. In this work we focus on studying the effects on word-character gating.

To better tackle the problem of reading comprehension, we also extend the idea of fine-grained gating for modeling the interaction between documents and queries. Previous work has shown the importance of modeling interactions between document and query tokens by introducing various attention architectures for the task (Hermann et al., 2015; Kadlec et al., 2016). Most of these use an inner product between the two representations to compute the relative importance of document tokens. The Gated-Attention Reader (Dhingra et al., 2016a) showed improved performance by replacing the inner-product with an element-wise product to allow for better matching at the semantic level. However, they use aggregated representations of the query which may lead to loss of information. In this work we use a fine-grained gating mechanism for each token in the paragraph and each token in the query. The fine-grained gating mechanism applies an element-wise multiplication of the two representations.

We show improved performance on reading comprehension datasets, including Children's Book Test (CBT), Who Did What, and SQuAD. On CBT, our approach achieves new state-of-the-art results without using an ensemble. Our model also improves over state-of-the-art results on the Who Did What dataset. To demonstrate the generality of our method, we apply our word-character fine-grained gating mechanism to a social media tag prediction task and show improved performance over previous methods.

Our contributions are two-fold. First, we present a fine-grained word-character gating mechanism and show improved performance on a variety of tasks including reading comprehension. Second, to better tackle the reading comprehension tasks, we extend our fine-grained gating approach to modeling the interaction between documents and queries."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Similar to the idea of gating, multiplicative integration has also been shown to provide a benefit in various settings. Yang et al. (2014) find that multiplicative operations are superior to additive operations in modeling relations. Wu et al.
(2016) propose to use Hadamard product to replace sum operation in recurrent networks, which gives a significant performance boost over existing RNN models. Dhingra et al. (2016a) use a multiplicative gating mechanism to achieve state-of-the-art results on question answering benchmarks.

Hybrid word-character models have been proposed to take advantages of both word-level and character-level representations. Ling et al. (2015) introduce a compositional character to word (C2W) model based on bidirectional LSTMs. Kim et al. (2016) describe a model that employs a convolutional neural network (CNN) and a highway network over characters for language modeling. Miyamoto & Cho (2016) use a gate to adaptively find the optimal mixture of the character-level and word-level inputs. Yang et al. (2016a) employ deep gated recurrent units on both character and word levels to encode morphology and context information. Concurrent to our work, Rei et al. (2016) employed a similar gating idea to combine word-level and character-level representations, but their focus is on low-level sequence tagging tasks and the gate is not conditioned on linguistic features.

The gating mechanism is widely used in sequence modeling. Long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997) are designed to deal with vanishing gradients through the gating mechanism. Similar to LSTM, Gated Recurrent Unit (GRU) was proposed by Cho et al. (2014), which also uses gating units to modulate the flow of information. The gating mechanism can also be viewed as a form of attention mechanism (Bahdanau et al., 2015; Yang et al., 2016b) over two inputs.

Reading comprehension is a challenging task for machines. A variety of models have been proposed to extract answers from given text (Hill et al., 2016; Kadlec et al., 2016; Trischler et al., 2016; Chen et al., 2016). Yu et al. (2016) propose a dynamic chunk reader to extract and rank a set of answer candidates from a given document to answer questions. Wang & Jiang (2016) introduce an end-to-end neural architecture which incorporates match-LSTM and pointer networks (Vinyals et al., 2015).

In this section, we will describe our fine-grained gating approach in the context of reading comprehension. We first introduce the settings of reading comprehension tasks and a general neural network architecture. We will then describe our word-character gating and document-query gating approaches respectively."}, {"section_index": "4", "section_name": "3.1 READING COMPREHENSION SETTING", "section_text": "The reading comprehension task involves a document P = (p_1, p_2, ..., p_M) and a query Q = (q_1, q_2, ..., q_N), where M and N are the lengths of the document and the query respectively. Each token p_i is denoted as (w_i, C_i), where w_i is a one-hot encoding of the token in the vocabulary and C_i is a matrix with each row representing a one-hot encoding of a character. Each token in the query q_j is similarly defined. We use i as a subscript for documents and j for queries. The output of the problem is an answer a, which can either be an index or a span of indices in the document.

Now we describe a general architecture used in this work, which is a generalization of the gated attention reader (Dhingra et al., 2016a). For each token in the document and the query, we compute a vector representation using a function f. More specifically, for each token p_i in the document we have h_i^0 = f(w_i, C_i). The same function f is also applied to the tokens in the query. Let H_p^0 and H_q denote the vector representations computed by f for tokens in documents and queries respectively. In Section 3.2, we will discuss the "word-character" fine-grained gating used to define the function f.

Suppose that we have a network of K layers. At the k-th layer, we apply RNNs on H_p^{k−1} and H_q to obtain hidden states P^k and Q^k, where P^k is an M × d matrix and Q^k is an N × d matrix with d being the number of hidden units in the RNNs. Then we use a function r to compute a new representation for the document H_p^k = r(P^k, Q^k). In Section 3.3, we will introduce the "document-query" fine-grained gating used to define the function r.

After going through K layers, we predict the answer index a using a softmax layer over hidden states H_p^K. For datasets where the answer is a span of text, we use two softmax layers for the start and end indices respectively.
Given a one-hot encoding w_i and a character sequence C_i, we now describe how to compute the vector representation h_i = f(w_i, C_i) for the token. In the rest of the section, we will drop the subscript i for notation simplicity.

We first apply an RNN on C and take the hidden state in the last time step c as the character-level representation (Yang et al., 2016a). Let E denote the token embedding lookup table. We perform a matrix-vector multiplication Ew to obtain a word-level representation. We assume c and Ew have the same length d_c in this work.

Previous methods defined f using the word-level representation Ew (Collobert et al., 2011), the character-level representation c (Ling et al., 2015), or the concatenation [Ew; c] (Yang et al., 2016a). Unlike these methods, we propose to use a gate to dynamically choose between the word-level and character-level representations based on the properties of the token. Let v denote a feature vector that encodes these properties. In this work, we use the concatenation of named entity tags, part-of-speech tags, binned document frequency vectors, and the word-level representations to form the feature vector v. Let d_v denote the length of v.

The gate is computed as follows:

g = σ(W_g v + b_g)

where W_g and b_g are the model parameters with shapes d_c × d_v and d_c, and σ denotes an element-wise sigmoid function.

The final representation is computed using a fine-grained gating mechanism:

h = f(c, w) = g ⊙ c + (1 − g) ⊙ w

Figure 1: Word-character fine-grained gating. The two lookup tables are shared. "NER", "POS", "frequency" refer to named entity tags, part-of-speech tags, document frequency features.

An illustration of our fine-grained gating mechanism is shown in Figure 1. Intuitively speaking, when the gate g has high values, more information flows from the character-level representation to the final representation; when the gate g has low values, the final representation is dominated by the word-level representation.

Though Miyamoto & Cho (2016) also use a gate to choose between word-level and character-level representations, our method is different in two ways. First, we use a more fine-grained gating mechanism, i.e., vector gates rather than scalar gates. Second, we condition the gate on features that better reflect the properties of the token. For example, for noun phrases and entities, we would expect the gate to bias towards character-level representations because noun phrases and entities are usually less common and display richer morphological structure. Experiments show that these changes are key to the performance improvements for reading comprehension tasks.

Our approach can be further generalized to a setting of multi-level networks so that we can combine multiple levels of representations using fine-grained gating mechanisms, which we leave for future work.
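The complete gating computation of this subsection fits in a few lines. The following NumPy sketch is our own illustration (dimensions and random inputs are arbitrary): it computes g = σ(W_g v + b_g) and h = g ⊙ c + (1 − g) ⊙ w.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def word_char_gate(c, w, v, W_g, b_g):
    # g = sigmoid(W_g v + b_g); h = g * c + (1 - g) * w, element-wise
    g = sigmoid(W_g @ v + b_g)        # one gate value per dimension
    return g * c + (1.0 - g) * w

d_c, d_v = 100, 40                    # illustrative sizes
rng = np.random.default_rng(0)
c = rng.normal(size=d_c)              # character-level representation
w = rng.normal(size=d_c)              # word-level representation Ew
v = rng.normal(size=d_v)              # token features (NER, POS, frequency, ...)
h = word_char_gate(c, w, v, rng.normal(size=(d_c, d_v)), np.zeros(d_c))
```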
"}, {"section_index": "5", "section_name": "3.3 DOCUMENT-QUERY FINE-GRAINED GATING", "section_text": "Given the hidden states P^k and Q^k, we now describe how to compute a representation H^k that encodes the interactions between the document and the query. In this section, we drop the superscript k (the layer number) for notation simplicity. Let p_i denote the i-th row of P and q_j denote the j-th row of Q. Let d_h denote the lengths of p_i and q_j.

Attention-over-attention (AoA) (Cui et al., 2016) defines a dot product between each pair of tokens in the document and the query, i.e., p_i^T q_j, followed by row-wise and column-wise softmax nonlinearities. AoA imposes pair-wise interactions between the document and the query, but using a dot product is potentially not expressive enough and hard to generalize to multi-layer networks. The gated attention (GA) reader (Dhingra et al., 2016a) defines an element-wise product as p_i ⊙ g_i where g_i is a gate computed by attention mechanism on the token p_i and the entire query. The intuition for the gate g_i is to attend to important information in the document. However, there is no direct pair-wise interaction between each token pair.

Figure 2: Paragraph-question fine-grained gating.

We present a fine-grained gating method that combines the advantages of the above methods (i.e., both pairwise and element-wise). We compute the pairwise element-wise product between the hidden states in the document and the query, as shown in Figure 2. More specifically, for p_i and q_j, we have

I_ij = tanh(p_i ⊙ q_j)

where q_j can be viewed as a gate to filter the information in p_i. We then use an attention mechanism over I_ij to output hidden states h_i as follows:

h_i = Σ_j softmax(u_h^T I_ij + w_i^T w_j b_h1 + b_h2) I_ij

where u_h is a d_h-dimensional model parameter, b_h1 and b_h2 are scalar model parameters, w_i and w_j are one-hot encodings for p_i and q_j respectively. We additionally use one-hot encodings in the attention mechanism to reinforce the matching between the same tokens since such information is not fully preserved in I_ij when k is large. The softmax nonlinearity is applied over all j's. The final hidden states H are formed by concatenating the h_i's for each token p_i.
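A vectorized sketch of the document-query gating above, written by us for illustration and assuming the w_i^T w_j term is precomputed as a 0/1 same-token matrix:

```python
import numpy as np

def document_query_gating(P, Q, u_h, b_h1, b_h2, same_token):
    # I_ij = tanh(p_i * q_j); h_i = sum_j softmax_j(u_h.I_ij + b_h1*[w_i=w_j] + b_h2) I_ij
    I = np.tanh(P[:, None, :] * Q[None, :, :])       # (M, N, d) pairwise gates
    scores = I @ u_h + b_h1 * same_token + b_h2      # (M, N) attention logits
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)        # softmax over query tokens j
    return (alpha[:, :, None] * I).sum(axis=1)       # h_i, shape (M, d)

M, N, d = 6, 4, 8
rng = np.random.default_rng(0)
same = rng.integers(0, 2, size=(M, N)).astype(float)  # the w_i^T w_j term
H = document_query_gating(rng.normal(size=(M, d)), rng.normal(size=(N, d)),
                          rng.normal(size=d), 1.0, 0.0, same)
```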
"}, {"section_index": "6", "section_name": "4.1 EVALUATING WORD-CHARACTER GATING ON TWITTER", "section_text": "We first present experimental results on the Twitter dataset where we can rule out the effects of different choices of network architectures, to demonstrate the effectiveness of our word-character fine-grained gating approach. Later we show experiments on more challenging datasets on reading comprehension to further show that our approach can be used to improve the performance on high-level NLP tasks as well.

We evaluate the effectiveness of our word-character fine-grained gating mechanism on a social media tag prediction task. We use the Twitter dataset and follow the experimental settings in Dhingra et al. (2016b). We also use the same network architecture upon the token representations, which is an LSTM layer followed by a softmax classification layer (Dhingra et al., 2016b). The Twitter dataset consists of English tweets with at least one hashtag from Twitter. Hashtags and HTML tags have been removed from the body of the tweet, and user names and URLs are replaced with special tokens. The dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2,039 distinct hashtags. The task is to predict the hashtags of each tweet.

We compare several different methods as follows. Word char concat uses the concatenation of word-level and character-level representations as in Yang et al. (2016a); word char feat concat concatenates the word-level and character-level representations along with the features described in Section 3.2; scalar gate uses a scalar gate similar to Miyamoto & Cho (2016) but is conditioned on the features; fine-grained gate is our method described in Section 3.2. We include word char feat concat for a fair comparison because our fine-grained gating approach also uses the token features.

Table 1: Performance on the Twitter dataset. "word" and "char" means using word-level and character-level representations respectively.

Model                        | Precision@1 | Recall@10 | Mean Rank
word (Dhingra et al., 2016b) | 0.241       | 0.428     | 133
char (Dhingra et al., 2016b) | 0.284       | 0.485     | 104
word char concat             | 0.2961      | 0.4959    | 105.8
word char feat concat        | 0.2951      | 0.4974    | 106.2
scalar gate                  | 0.2974      | 0.4982    | 104.2
fine-grained gate            | 0.3069      | 0.5119    | 101.5

The results are shown in Table 1. We report three evaluation metrics including precision@1, recall@10, and mean rank. Our method outperforms character-level models used in Dhingra et al. (2016b) by 2.29%, 2.69%, and 2.5 points in terms of precision, recall and mean rank respectively. We can observe that the scalar gating approach can only marginally improve over the baseline methods, while fine-grained gating methods can substantially improve model performance. Note that directly concatenating the token features with the character-level and word-level representations does not boost the performance, but using the token features to compute a gate (as done in fine-grained gating) leads to better results.
This indicates that the benefit of fine-grained gating mainly comes from better modeling rather than using additional features."}, {"section_index": "7", "section_name": "4.2 PERFORMANCE ON READING COMPREHENSION", "section_text": "We evaluate our model on cloze-style question answering benchmarks. After investigating the effectiveness of the word-character fine-grained gating mechanism on the Twitter dataset, we now move on to a more challenging task, reading comprehension. In this section, we experiment with two datasets, the Children's Book Test dataset (Hill et al., 2016) and the SQuAD dataset (Rajpurkar et al., 2016).

The Children's Book Test (CBT) dataset is built from children's books. The whole dataset has 669,343 questions for training, 8,000 for validation and 10,000 for testing. We closely follow the setting in Dhingra et al. (2016a) and incrementally add different components to see the changes in performance. For the fine-grained gating approach, we use the same hyper-parameters as in Dhingra et al. (2016a) except that we use a character-level GRU with 100 units to be of the same size as the word lookup table. The word embeddings are updated during training.

In addition to different ways of combining word-level and character-level representations, we also compare two different ways of integrating documents and queries: GA refers to the gated attention reader (Dhingra et al., 2016a) and FG refers to our fine-grained gating described in Section 3.3.

Table 2: Performance on the CBT dataset. The "GA word char concat" results are extracted from Dhingra et al. (2016a). Our results on fine-grained gating are based on a single model. "CN" and "NE" are two widely used question categories. "dev" means development set, and "test" means test set.

Model                    | CN dev | CN test | NE dev | NE test
GA word char concat      | 0.731  | 0.696   | 0.768  | 0.725
GA word char feat concat | 0.7250 | 0.6928  | 0.7815 | 0.7256
GA scalar gate           | 0.7240 | 0.6908  | 0.7810 | 0.7260
GA fine-grained gate     | 0.7425 | 0.7084  | 0.7890 | 0.7464
FG fine-grained gate     | 0.7530 | 0.7204  | 0.7910 | 0.7496
—                        | 0.721  | 0.692   | 0.752  | 0.686
—                        | 0.715  | 0.674   | 0.753  | 0.697
—                        | 0.722  | 0.694   | 0.778  | 0.720
—                        | 0.743  | 0.719   | 0.782  | 0.732
—                        | 0.711  | 0.689   | 0.762  | 0.710
—                        | 0.741  | 0.710   | 0.769  | 0.720
—                        | 0.736  | 0.706   | 0.766  | 0.718

The results are reported in Table 2. We report the results on common noun (CN) questions and named entity (NE) questions, which are two widely used question categories in CBT. Our fine-grained gating approach achieves new state-of-the-art performance on both settings and outperforms the current state-of-the-art results by up to 1.76% without using ensembles. Our method outperforms the baseline GA reader by up to 2.4%, which indicates the effectiveness of the fine-grained gating mechanism. Consistent with the results on the Twitter dataset, using word-character fine-grained gating can substantially improve the performance over concatenation or scalar gating. Furthermore, we can see that document-query fine-grained gating also contributes significantly to the final results.

We also apply our fine-grained gating model to the Who Did What (WDW) dataset (Onishi et al., 2016). As shown in Table 3, our model achieves state-of-the-art results compared to strong baselines. We fix the word embeddings during training.

Table 3: Performance on the Who Did What dataset. "dev" means development set, and "test" means test set. "WDW-R" is the relaxed version of WDW.

Model      | WDW dev | WDW test | WDW-R dev | WDW-R test
—          | -       | 0.570    | -         | 0.590
—          | -       | 0.640    | -         | 0.650
—          | -       | 0.662    | 0.670     | 0.667
—          | -       | 0.712    | 0.726     | 0.726
this paper | 0.723   | 0.717    | 0.731     | 0.726

The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset collected recently (Rajpurkar et al., 2016).
It contains 23,215 paragraphs coming from 536 Wikipedia articles. Unlike other reading comprehension datasets such as CBT, the answers are a span of text rather than a single word. The dataset is partitioned into a training set (80%, 87,636 question-answer pairs), a development set (10%, 10,600 question-answer pairs) and a test set which is not released.

We report our results in Table 4. "Exact match" computes the ratio of questions that are answered correctly by strict string comparison, and the F1 score is computed on the token level. We can observe that both word-character fine-grained gating and document-query fine-grained gating can substantially improve the performance, leading to state-of-the-art results among published papers. Note that at the time of submission, the best score on the leaderboard is 0.716 in exact match and 0.804 in F1, from systems without published papers. A gap exists because our architecture described in Section 3.1 does not specifically model the answer span structure that is unique to SQuAD. In this work, we focus on this general architecture to study the effectiveness of fine-grained gating mechanisms.

Table 4: Performance on the SQuAD dev set. Test set results are included in the brackets.

Model                           | F1             | Exact Match
GA word                         | 0.6695         | 0.5492
GA word char concat             | 0.6857         | 0.5639
GA word char feat concat        | 0.6904         | 0.5711
GA scalar gate                  | 0.6850         | 0.5620
GA fine-grained gate            | 0.6983         | 0.5804
FG fine-grained gate            | 0.7125         | 0.5995
FG fine-grained gate + ensemble | 0.7341 (0.733) | 0.6238 (0.625)
Yu et al. (2016)                | 0.712 (0.710)  | 0.625 (0.625)
Wang & Jiang (2016)             | 0.700 (0.703)  | 0.591 (0.595)

We visualize the model parameter W_g as described in Section 3.2. For each feature, we average the corresponding weight vector in W_g. The results are shown in Figure 3. We can see that named entities like "Organization" and noun phrases (with tags "NNP" or "NNPS") tend to use character-level representations, which is consistent with human intuition because those tokens are usually infrequent or display rich morphologies. Also, DOCLEN-4, WH-adverb ("WRB"), and conjunction ("IN" and "CC") tokens tend to use word-level representations because they appear frequently.

Figure 3: Visualization of the weight matrix W_g. Weights for each feature are averaged. Red means high and yellow means low. High weight values favor character-level representations, and low weight values favor word-level representations. "Organization", "Person", "Location", and "O" are named entity tags; "DOCLEN-n" are document frequency features (larger n means higher frequency, n from 0 to 4); others are POS tags.

We also sample random spans of text from the SQuAD dataset, and visualize the average gate values in Figure 4. The results are consistent with our observations in Figure 3. Rare tokens, noun phrases, and named entities tend to use character-level representations, while others tend to use word-level representations. To further justify this argument, we also list the tokens with highest and lowest gate values in Table 5.

Figure 4: Visualization of gate values in the text. Red means high and yellow means low. High gate values favor character-level representations, and low gate values favor word-level representations.

Table 5: Word tokens with highest and lowest gate values. High gate values favor character-level representations, and low gate values favor word-level representations.

We present a fine-grained gating mechanism that dynamically combines word-level and character-level representations based on word properties. Experiments on the Twitter tag prediction dataset show that fine-grained gating substantially outperforms scalar gating and concatenation. Our method also improves the performance on reading comprehension and achieves new state-of-the-art results on CBT and WDW. In our future work, we plan to apply the fine-grained gating mechanism for combining other levels of representations, such as phrases and sentences.
It will also be intriguing to integrate NER and POS networks and learn the token representation in an end-to-end manner."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by NVIDIA, the Office of Naval Research Scene Understanding grant N000141310721, the NSF grant IIS 1250956, and Google Research."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.

Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016a.

Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. Tweet2vec: Character-based distributed representations for social media. In ACL, 2016b.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pp. 1693-1701, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. In AAAI, 2016.

Minh-Thang Luong and Christopher D Manning. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL, 2016.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.

Marek Rei, Gamal KO Crichton, and Sampo Pyysalo. Attending to characters in neural sequence labeling models. arXiv preprint arXiv:1611.04361, 2016.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, pp. 2692-2700, 2015.

Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, and William W Cohen. Review networks for caption generation. In NIPS, 2016b.

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension.
arXiv preprint arXiv:1610.09996, 2016.

Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. In EMNLP, 2016.

Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. In NIPS, 2016.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Learning multi-relational semantics using neural-embedding models. In NIPS 2014 workshop on Learning Semantics, 2014."}]
ryUPiRvge
[{"section_index": "0", "section_name": "EXTRAPOLATION AND LEARNING EQUATIONS", "section_text": "Georg Martius & Christoph H. Lampert"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "The quality of a model is typically measured by its ability to generalize from a training set to\npreviously unseen data from the same distribution. In regression tasks generalization essentially\nboils down to interpolation if the training data is sufficiently dense. As long as models are selected\ncorrectly, i.e. in a way to not overfit the data, the regression problem is well understood and can -\nat least conceptually \u2014 be considered solved. However, when working with data from real-world\ndevices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that\nfuture data lies outside of the training domain, e. g. when the arm is temporarily operated outside\nof its specifications. For the sake of robustness and safety it is desirable in such a case to have a\nregression model that continues to make good predictions, or at least does not fail catastrophically.\nThis setting, which we call extrapolation generalization, is the topic of the present paper.\nWe are particularly interested in regression tasks for systems that can be described by real-valued\nanalytic expression, e. g. mechanical systems such as a pendulum or a robotic arm. These are typically\ngoverned by a highly nonlinear function but it is nevertheless possible, in principle, to infer thei\nbehavior on an extrapolation domain from their behavior elsewhere. We make two main contributions\n1) a new type of network that can learn analytical expressions and is able to extrapolate to unseer\ndomains and 2) a model selection strategy tailored to the extrapolation setting.\nThe following section describes the setting of regression and extrapolation. Afterwards we introduce\nour method and discuss the architecture, its training, and its relation to prior art. We present our\nresults in the Section Experimental evaluation and close with conclusions."}, {"section_index": "2", "section_name": "REGRESSION AND EXTRAPOLATION", "section_text": "We consider a multivariate regression problem with a training set {(1,41),-..,(@w,yn)} witl\nx \u20ac R\",y \u20ac R\u201d. Because our main interest lies on extrapolation in the context of learning thi\ndynamics of physical systems we assume the data originates from an unknown analytical function (0\nsystem of functions), \u00a2 : R\u201d \u2014 R\u201d with additive zero-mean noise, \u20ac, i.e. y = o(a) + \u20ac and EE = (\nThe function \u00a2 may, for instance, reflect a system of ordinary differential equations that govern th\nmovements of a robot arm or the like. The general task is to learn a function y : R\u201d \u2014 R\"\u2122 tha\napproximates the true functional relation as well as possible in the squared loss sense, i. e. achieve:\nminimal expected error E||z)(x) \u2014 \u00a2(z)||?. In practice, we only have particular examples of the\nfunction values available and measure the quality of predicting in terms of the empirical error or"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "In classical machine learning, regression is treated as a black box process of\ndentifying a suitable function from a hypothesis set without attempting to gair\nnsight into the mechanism connecting inputs and outputs. 
In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient-based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.

If training and test data are sampled from the same distribution then we speak about an interpolation problem. In the extrapolation setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, e.g. for higher velocities. To succeed in this task, it is essential to identify the underlying functional relationship instead of just minimizing the empirical error, as detailed below. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection."}, {"section_index": "4", "section_name": "LEARNING A NETWORK FOR FUNCTION EXTRAPOLATION", "section_text": "The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an L-layer network, there are L − 1 hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure (k' inputs, k outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match.

The linear mapping at level l maps the k'-dimensional input y^(l−1) to the d-dimensional intermediate representation z given by

z^(l) = W^(l) y^(l−1) + b^(l),

where y^(l−1) is the output of the previous layer, with the convention y^(0) = x. The weight matrix W^(l) ∈ R^{d×k'} and the bias vector b^(l) ∈ R^d are free parameters that are learned during training. The non-linear transformation contains u unary units, f_i : R → R, for i = 1, ..., u, and v binary units, g_j : R × R → R for j = 1, ..., v. Their outputs are concatenated to form the layer output

y^(l) := ( f_1(z_1^(l)), f_2(z_2^(l)), ..., f_u(z_u^(l)), g_1(z_{u+1}^(l), z_{u+2}^(l)), ..., g_v(z_{u+2v−1}^(l), z_{u+2v}^(l)) ).

In total, the nonlinear stage has k = u + v outputs and d = u + 2v inputs. The unary units f_1, ..., f_u receive the respective components z_1, ..., z_u as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter I_i ∈ {0, 1, 2, 3}:

f_i(z_i) := z_i if I_i = 0;  sin(z_i) if I_i = 1;  cos(z_i) if I_i = 2;  sigm(z_i) if I_i = 3,

where sigm(z) = 1 / (1 + e^{−z}) is the standard sigmoid function. The binary units g_1, ..., g_v receive the remaining components z_{u+1}, ..., z_{u+2v} as input in pairs of two. They are multiplication units that compute the product of their two input values:

g_j(z_{u+2j−1}, z_{u+2j}) := z_{u+2j−1} · z_{u+2j}.

The last layer computes the regression values by a linear read-out

y^(L) := W^(L) y^(L−1) + b^(L).
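For illustration, one EQL hidden layer and a small L = 3 network can be sketched as follows. This is our own NumPy sketch with random weights; the unary type assignment and dimensions follow the u = 4, v = 1 example of Figure 1.

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

UNARY = [lambda z: z, np.sin, np.cos, sigm]     # base functions for I_i = 0..3

def eql_layer(y_prev, W, b, unary_types):
    # z = W y + b, then u unary units f_i and v multiplication units g_j
    z = W @ y_prev + b
    u, v = len(unary_types), (len(b) - len(unary_types)) // 2
    unary_out = [UNARY[t](z[i]) for i, t in enumerate(unary_types)]
    binary_out = [z[u + 2 * j] * z[u + 2 * j + 1] for j in range(v)]
    return np.array(unary_out + binary_out)     # k = u + v outputs

rng = np.random.default_rng(0)
x = rng.normal(size=2)                          # n = 2 inputs
W1, b1 = rng.normal(size=(6, 2)), np.zeros(6)   # d = u + 2v = 4 + 2 = 6
W2, b2 = rng.normal(size=(6, 5)), np.zeros(6)   # previous layer has k = 5 outputs
W3, b3 = rng.normal(size=(1, 5)), np.zeros(1)
y1 = eql_layer(x,  W1, b1, unary_types=[0, 1, 2, 3])
y2 = eql_layer(y1, W2, b2, unary_types=[0, 1, 2, 3])
y  = W3 @ y2 + b3                               # linear read-out y^(L)
```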
[i] We call the new architecture Equation Learner (EQL) and\nJenote the function it defines by u\nFigure 1: Network architecture of the proposed Equation Learner (EQL) for 3 layers (Z = 3) and on\nneuron per type (u = 4,v = 1).\nThe proposed network architecture differs in two main aspects from typical feed-forward networks\nthe existence of multiplication units and the possibility of sine and cosine as nonlinearities for the\nunary units. Both design choices are motivated by our objective of learning a system of equations\nthat govern a physical system and can extrapolate to new parts of the input space.\nSigmoid nonlinearities are the canonical choice of activation function for artificial neural networks\n(ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a\nsuper class of ANNs. However, they were typically disabled by the training procedure corresponding\nto their absence in the considered physical equations. Other, predominantly local nonlinearities, in\nparticular radial basis functions |Broomhead & Lowe] (1988) we do not include, since one cannot\nexpect them to extrapolate at all. Further nonlinearities, such as (square) roots and logarithms, could\nin principle be useful for learning physical equations, but they pose problems because their domains\nof definition is restricted to positive inputs. We leave the task of incorporating them in a principled\nway to future work.\nThe ability to multiply two values is a second crucial component of our network architecture. Again, it\nis inspired by the typical form of physical equations, where multiplication of components is arguably\nsecond common basic operation after addition (which the linear layers can perform). Multiplication\nwas introduced into neural networks long ago as product-units [Durbin & Rumelhart] (1989) and\nPi-Sigma-unit (1991). The product-units have large fan-in that compute products over\nall their inputs, potentiated by the respective weights. The result is typically the behavior of a high\norder polynomial, which are powerful function approximators, but rarely occur in physical equations\nPolynomials are also known to require careful fine-tuning in order not to overfit, which makes them a\nrisky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed\nnumber of factors and our multiplication units are a special for 2 factors. We find that multiplying\njust two values at a time is well adjusted to the task we aim at, as it allows to control the maximal\ndegree of the learned polynomial by the depth of the network.\nFinally, each layer of the network contains unary units that act as identity maps, which in particular\ngives the network the option to learn functions with smaller number of nonlinearities than the total\nnetwork depths.\nThe EQL is fully differentiable in its free parameters 6 = {W\u201c),..., WB... ,b)}, whict\nallows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-lik\u00ab\n\nobjective (1996),\n|D|\n\nL\u00a3(D) = FY Wed al? ad 0h,\n[OO O8|\n\nw\u00ae\n\n(all-to-all)\n\nS\n\neeeats\n\nSS\n2\nS\n3\n\nHob\nthat is, a linear combination of L loss and L regularization, and apply a stochastic gradient descent\nalgorithm with mini-batches and Adam{Kingma & Ba|(2015) for calculating the updates:\n9141 = 6, + Adam (\u201coe \u00b0)\ni\nwhere D(;) denotes the current mini-batch and a is the stepsize parameter. 
The choice of Adam is not critical, and standard stochastic gradient descent also works. In all numerical experiments we use α = 0.001 and a mini-batch size of 20.
The role of the L1 regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.
Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure (t < t_1) we use no regularization (λ = 0), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting λ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training (t > t_2) we disable L1 regularization (λ = 0) but enforce the same L0 norm of the weights. This is achieved by keeping all weights w ∈ W^(1...L) that are close to 0 at 0, i.e. if |w| < 0.001 then w = 0 during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints t_1 and t_2 is not critical. In practice, we use t_1 = T/4 and t_2 = (19/20) T, where T is the total number of update steps. T was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping would be disadvantageous.
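To make the architecture and the phased training schedule above concrete, the following is a minimal sketch in Python/PyTorch. It is illustrative only: the unit-type assignment, helper names, and the exact wiring are our assumptions following the description above, not the authors' implementation.

    import torch
    import torch.nn as nn

    class EQLLayer(nn.Module):
        # One hidden layer: z = W y + b, followed by u unary units
        # (identity / sin / cos / sigmoid, fixed by unit_types) and
        # v pairwise multiplication units.
        def __init__(self, in_dim, u, v, unit_types):
            super().__init__()
            self.u, self.v = u, v
            self.unit_types = unit_types  # length-u list with entries in {0,1,2,3}
            self.lin = nn.Linear(in_dim, u + 2 * v)

        def forward(self, y):
            z = self.lin(y)
            base = [lambda t: t, torch.sin, torch.cos, torch.sigmoid]
            unary = [base[self.unit_types[i]](z[:, i]) for i in range(self.u)]
            prod = z[:, self.u::2] * z[:, self.u + 1::2]  # g_j = z_{u+2j-1} * z_{u+2j}
            return torch.cat([torch.stack(unary, dim=1), prod], dim=1)

    def objective(net, x, y, step, T, lam):
        # Lasso-like loss with the phased L1 schedule: off before t1,
        # on between t1 and t2, off again afterwards (where weights with
        # |w| < 0.001 would additionally be clamped to zero, not shown).
        t1, t2 = T // 4, (19 * T) // 20
        loss = ((net(x) - y) ** 2).sum(dim=1).mean()
        if t1 <= step < t2:
            loss = loss + lam * sum(p.abs().sum()
                                    for n, p in net.named_parameters() if "weight" in n)
        return loss

A full EQL stacks L − 1 such layers followed by a plain linear output layer; training then loops over mini-batches, applying Adam to this objective.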
"}, {"section_index": "5", "section_name": "MODEL SELECTION FOR EXTRAPOLATION", "section_text": "EQL networks have a number of hyper-parameters, e.g. the number of layers, the number of units, and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the 'right' formula. But how can we tell? Using Occam's razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation 1 − x²/2 + x⁴/24, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i.e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w.r.t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space); see the selection criterion in Appendix A1.
Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations."}, {"section_index": "6", "section_name": "RELATED WORK", "section_text": "In the field of machine learning, regression is often treated as a black-box process of identifying a suitable real-valued function from a hypothesis set, e.g. a reproducing kernel Hilbert space for Gaussian Process Regression (GPR) (Williams & Rasmussen, 2006) or Support Vector Regression (SVR) (Smola & Schölkopf, 2004), or a multi-layer network of suitable expressive power (Specht, 1991). The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a 'biologically plausible' model is often preferable over finding the one with the highest prediction accuracy. As a consequence, model classes are often highly constrained, e.g. allowing only for sparse linear models.
The task of learning a true, nonlinear, functional dependence from observing a physical system has received little attention in the machine learning literature so far, but forms the basis of the field of system identification. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher-order convolution integrals (Volterra series), but learning analytic formulas is not common.
Causal learning is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence (Pearl, 2000). Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions (Peters et al., 2014). The topic of learning a regression function with emphasis on extrapolation performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, i.e. predicting the next value(s) (Wiener, 1949). By our nomenclature, this is typically rather an interpolation task, where the prediction is based on the behaviour of the series at earlier time steps but with a similar value distribution (Müller et al., 1997; Győrfi et al., 2013). Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the domain adaptation setting.
In particular, since we assume a common labeling function, our setting would fall under the covariate shift setting (Quionero-Candela et al., 2009). Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time (Ben-David et al., 2010). In our setting this is not possible to obtain.
On the technical level, EQL networks are an instance of general feed-forward networks for function approximation (Bishop, 1995). In contrast to recent trends towards deep learning (Bengio, 2009; Bengio et al., 2013), our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, EQL networks resemble sum-product networks (SPNs) (Poon & Domingos, 2012) and Pi-Sigma networks (PSNs) (Shin & Ghosh, 1991), in the sense that both are based on directed acyclic graphs with computational units that allow for summation and multiplication. Otherwise, SPNs are different, as they act as an efficient alternative to probabilistic graphical models for representing probability distributions, whereas EQL networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in EQL multiplication is optional.
Finding equations for observations is also known as symbolic regression, where a search is performed in a certain function space, typically done with evolutionary computation. With these techniques it is possible to discover physical laws such as invariants and conserved quantities (Schmidt & Lipson, 2009). Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient-based optimization problem. Related to symbolic regression is finding mathematical identities, for instance to find computationally more efficient expressions. In Zaremba et al. (2014) this was done using machine learning to overcome the potentially exponential search space."}, {"section_index": "7", "section_name": "EXPERIMENTAL EVALUATION", "section_text": "We demonstrate the ability of EQL to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure in Python using the Theano framework.
Table 1: Numeric results on the pendulum dataset. Reported are the mean and standard deviation of the root mean squares error (RMS) (√E, Eq. (1)) on different test sets for 10 random initializations.
Pendulum. We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is X = R × R, where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature these are usually denoted as (θ, ω), but for our purposes, we call them (x_1, x_2) in order to keep the notation consistent between experiments.
The pendulum's dynamic behavior is governed by the following two ordinary differential equations:

ẋ_1 = x_2    and    ẋ_2 = −g sin(x_1),    (9)

where g = 9.81 is the gravitation constant. We divide each equation by g in order to balance the output scales and form a regression problem with two output values, y_1 = x_2/g and y_2 = −sin(x_1).
As training data, we sample 1000 points uniformly in the hypercube [−h, h] × [−h, h] for h = 2. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation σ = 0.01. We also define three test sets, each with 1000 points. The interpolation test set is sampled from the same data distribution as the training set. The extrapolation (near) test set contains data sampled uniformly from the domain [−3h/2, 3h/2] × [−3h/2, 3h/2] \ [−h, h] × [−h, h], which is relatively near the training region, and the extrapolation (far) test set extends the region to further outside: [−2h, 2h] × [−2h, 2h] \ [−h, h] × [−h, h]. We train a 2-layer EQL and perform model selection among the hyper-parameters: the regularization strength λ ∈ 10^{−7, −6.3, −6, −5.3, −5, −4.3, −4, −3.3, −3} and the number of nodes u = v ∈ {1, 3, 5}. All weights are randomly initialized from a normal distribution with σ = √(1/(k' + d)). The unit selection I is set such that all unit types are equally often present. To ensure convergence we chose T = 10000 epochs. We compare our algorithm to a standard multilayer perceptron (MLP) with tanh activation functions and possible hyperparameters: λ as for EQL, number of layers L ∈ {2, 3}, and number of neurons k ∈ {5, 10, 20}. A second baseline is given by epsilon support vector regression (SVR) (Basak et al., 2007) with two hyperparameters C ∈ 10^{−3, −2, −1, 0, 1, 2, 3, 3.5} and ε ∈ 10^{−3, −2, −1}, using a radial basis function kernel with width γ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}.
Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well, with a test error on the order of the noise level (σ = 0.01). For extrapolation, however, the performance differs between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure 2: while the MLP and SVR simply learn a function that interpolates the training values, EQL finds the correct functional expression and therefore predicts the correct values for any input data.

Table 1 (pendulum):
         interpolation      extrapol. (near)   extrapol. (far)
EQL      0.0102 ± 0.0000    0.012 ± 0.002      0.016 ± 0.007
MLP      0.0138 ± 0.0002    0.150 ± 0.012      0.364 ± 0.036
SVR      0.0105             0.041              0.18

Figure 2: Learning pendulum dynamics. (a) slices of outputs y_1 (left) and y_2 (right) for inputs x_1 = x_2 = x for the true system equation (Eq. 9) and one of the EQL, MLP, SVR instances. The shaded area marks the training region and the vertical bars show the size of the near and far extrapolation domains. (b) one of the learned networks. Numbers on the edges correspond to the entries of W and numbers inside the nodes show the bias values b. All weights with |w| < 0.01 and orphan nodes are omitted. Learned formulas: y_1 = 0.103 x_2, y_2 = sin(−x_1), which are correct up to symmetry (1/g = 0.102).
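The data-generation protocol for this experiment is easy to replicate; the following sketch (illustrative NumPy code, with function names of our own choosing) builds the training, interpolation, and extrapolation sets with h = 2 and σ = 0.01 as described above.

    import numpy as np

    def pendulum_data(n, lo, hi, sigma=0.01, g=9.81, seed=0):
        # Sample inputs uniformly from [lo, hi]^2 and return noisy targets
        # y1 = x2 / g and y2 = -sin(x1).
        rng = np.random.RandomState(seed)
        x = rng.uniform(lo, hi, size=(n, 2))
        y = np.stack([x[:, 1] / g, -np.sin(x[:, 0])], axis=1)
        return x, y + sigma * rng.randn(n, 2)

    def extrapolation_data(n, h, scale, seed=2):
        # Sample from the larger box [-scale*h, scale*h]^2 and keep only
        # points outside the training region [-h, h]^2.
        x, y = pendulum_data(8 * n, -scale * h, scale * h, seed=seed)
        mask = np.abs(x).max(axis=1) > h
        return x[mask][:n], y[mask][:n]

    h = 2.0
    x_train, y_train = pendulum_data(1000, -h, h)            # training set
    x_interp, y_interp = pendulum_data(1000, -h, h, seed=1)  # interpolation test
    x_near, y_near = extrapolation_data(1000, h, 1.5)        # near extrapolation
    x_far, y_far = extrapolation_data(1000, h, 2.0)          # far extrapolation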
Double pendulum kinematics. The second system we consider is a real double pendulum for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum (Schmidt & Lipson, 2009). The task here is to learn the positions of the tips of the double pendulum segments from the given joint angles (x_1, x_2). These positions were not measured, so we supply them by the following formulas: y_1 = cos(x_1), y_2 = cos(x_1) + cos(x_1 + x_2), y_3 = sin(x_1), y_4 = sin(x_1) + sin(x_1 + x_2), where (y_1, y_3) and (y_2, y_4) correspond to the x-y-coordinates of the first and second end-point, respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10% were used as validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments, such that a much larger domain is covered. Nevertheless the angle values are confined to [−π, π]. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c). The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in Fig. 3(d). Model selection is performed to determine λ as above, u = v ∈ {3, 5}, (MLP: k ∈ {5, 10, 20}) and layer number L ∈ {2, 3}.
Figure 3: Double pendulum kinematics. (a) training trajectory (in y-space). (b) extrapolation test trajectory (in y-space) with output of a learned EQL instance. (c) slices of output y_4 for inputs x_1 = x_2 = x for the true system and one of the EQL, MLP, and SVR instances. (d) numeric results, see Tab. 1 for details; the extrapolation errors are 0.0003 ± 0.00003 (EQL), 0.58 ± 0.03 (MLP), and 0.25 (SVR). Note that predicting 0 would yield a mean error of 0.84.
Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitudes in [−π/2, π/2], each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms, respectively, with added noise as above. For testing extrapolation performance, the amplitude [−π, π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions (kin-5-all). The numerical results, see Tab. 2, show that our method is able to extrapolate in these cases. Model selection as above with u = v ∈ {10, 20}, (MLP: k ∈ {10, 50}) and layer number L ∈ {2, 3, 4}.
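For intuition, the forward-kinematics targets for such planar arms reduce to cumulative sums of joint angles; a short sketch follows (assuming NumPy and the 0.5-unit segment length stated above; the dataset's exact target conventions are our assumption).

    import numpy as np

    def arm_positions(angles, seg_len=0.5):
        # Planar forward kinematics: each joint angle adds to the previous
        # orientation; segment endpoints accumulate along the chain.
        # angles: array of shape (n_samples, n_joints).
        theta = np.cumsum(angles, axis=1)
        x = seg_len * np.cumsum(np.cos(theta), axis=1)
        y = seg_len * np.cumsum(np.sin(theta), axis=1)
        return x, y  # end-effector coordinates are x[:, -1], y[:, -1]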
To illustrate the dependence on the amount of noise and the number of available training points, we provide a quantification in Appendix A2. In short, increasing noise can be compensated by an increasing amount of data to keep the performance.

Table 2: Extrapolation performance for kinematics of robotic arms. See Tab. 1 for details. Standard deviations for 5 random initializations. Interpolation error for all methods is around 0.012 ± 0.02.
         kin-3-end          kin-4-end          kin-5-all
EQL      0.017 ± 0.000      0.012 ± 0.000      0.011 ± 0.000
MLP      0.389 ± 0.014      0.415 ± 0.020      0.346 ± 0.013
SVR      0.235              0.590              0.260

Learning complex formulas. In order to find out whether EQL can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output:

y = 1/3 (sin(π x_1) + sin(2π x_2 + π/8) + x_2 − x_3 x_4)       (F-1)
y = 1/3 (sin(π x_1) + x_2 cos(2π x_1 + π/4) + x_3 − x_4²)      (F-2)
y = 1/3 ((1 + x_2) sin(π x_1) + x_2 x_3 x_4)                   (F-3)

The first equation requires only one hidden layer to be represented. The second and third equations should require two hidden layers. In particular, F-2 contains a product of x_2 and cos, and F-3 contains a product of three terms, and we use it to test whether our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with h = 1 as input data range. We use 10000 points for the training and validation sets (90%-10% split) and 5000 points for each of the test sets. Model selection for EQL is performed as above using the number of layers L ∈ {2, 3, 4}. The number of units is set to u = v = 10. For the MLP, we select L and λ from the same sets as above, as well as k ∈ {10, 30}.
Table 3 shows the numerical results. Again, all methods are able to interpolate, but only EQL achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. Figure 4 illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement x_2 cos(2π x_1 + π/4)? Apparently it uses 1.21 (cos(−2π x_1 + π + π/4 + 0.41 x_2) + sin(2π x_1 + π/4 + 0.41 x_2)), which is a good approximation for x_2 ∈ [−2, 2]. The sparsity of this solution is 5, whereas the true solution needs at least 6, which explains its selection. For F-3 the suboptimal local minimum uses a strange way of approximating (1 + x_2) sin(π x_1), namely (x_1 + x_1 x_2) cos(β x_1), which deviates fast; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always get the correct formula, see Fig. 4(c).
X-Ray transition energies. As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way.
The data is taken from Deslattes et al. (2003), where we consider one specific transition, called the Kα2 line, because it was measured for all elements. The true relationship between atomic number Z and transition energies is complicated, as it involves many-body interactions, and no closed-form solution exists. Nevertheless, we can find out which relationships our system proposes. It is known that the main relationship is Kα2 ∝ Z², according to Moseley's law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10 ≤ Z ≤ 100, which is split into training/validation sets in the range [10, 91] (70/10 data points) and an extrapolation test set in the interval [92, 100] (14 data points because of isotopes). Since we have so little data, we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in [0, 1], i.e. x = Z/100 and y = Kα2/100000. Model selection is here based on validation error only. The selection for sparsity and validation error yields the Z² relationship. Mini-batch size is 2 here and T = 50000 was used. Figure 5 presents the data, the predictions, the learned formulas and the numerical results. EQL and SVR achieve similar performance and MLP is significantly worse.

Figure 4: Formula learning analysis. (a) for F-1, (b) for F-2, and (c) for F-3. (left) y for a single cut through the input space for the true system equations F-1 to F-3 and for an instance of EQL and MLP. (right) the learned networks correspondingly, see Fig. 2 for details. The formula representations were extracted from the networks: for F-1, 0.33 sin(3.13 x_1) + 0.33 sin(6.28 x_2 + 0.39) + 0.33 x_2 − 0.056 − 0.33 x_3 x_4; for F-2, 0.33 cos(3.14 x_1 + 1.57) + 0.33 x_3 − 0.33 x_4² + 0.41 cos(−6.28 x_1 + 3.93 + 0.41 x_2) + 0.41 sin(6.29 x_1 + 0.79 + 0.41 x_2); for F-3 (EQL), 0.61 (x_1 + x_1 x_2)(cos(−2.36 x_1) + 0.71) + 0.33 x_2 x_3 x_4; for F-3 (EQL, no cos), 0.33 (1 + x_2) sin(3.14 x_1) + 0.33 x_2 x_3 x_4. For F-3 the algorithm fails with the overcomplete base and typically (9/10 times) ends up in a local minimum. With fewer base functions (no cosine) the right formula is found. Both results are presented. See text for a discussion.

Table 3: Interpolation and extrapolation performance for formula learning. See Tab. 1 for details.
dataset  method         interpolation     extrapol. (near)   extrapol. (far)
F-1      EQL            0.010 ± 0.000     0.015 ± 0.005      0.026 ± 0.015
         MLP            0.011 ± 0.000     0.32 ± 0.12        0.920 ± 0.420
         SVR            0.011             0.28               1.2
F-2      EQL            0.01 ± 0.00       0.013 ± 0.004      0.026 ± 0.019
         MLP            0.01 ± 0.00       0.2 ± 0.014        0.49 ± 0.043
         SVR            0.011             0.3                0.94
F-3      EQL            0.01 ± 0.000      0.047 ± 0.012      0.35 ± 0.11
         EQL (no cos)   0.01 ± 0.000      0.01 ± 0.000       0.011 ± 0.001
         MLP            0.01 ± 0.000      0.084 ± 0.007      0.4 ± 0.021
         SVR            0.01              0.071              0.39

Figure 5: X-Ray transition energies. (a) Measured data and predicted values by EQL and (b) visualized prediction error for all methods for one train/validation splitting. (c) EQL solutions during model selection in validation error - sparsity space, see Appendix A1 for details. (d) numeric results; reported are RMS errors with standard deviation for 10 independent train/validation splits: EQL 0.00042 (interpolation) and 0.0061 ± 0.0038 (extrapolation), MLP 0.002 and 0.0180 ± 0.0024, SVR 0.00067 and 0.0057 ± 0.0014. In real units the error is in 100 keV and is well below the difference between neighboring high-Z elements. (e) learned formulas for different sparsities s (lowest dot for each s in (c)): s = 1: y = 1.28 x² − 0.18 x + 0.026; s = 2: y = 1.98 x² − 1.42 x + 0.618 − 1.45 sigm(−3.65 x − 0.3); s = 3: y = −0.38 z + 2.47 sigm(−2.25 x − 2.77) + 0.38 with z = cos(2.32 x − 0.08); s = 4: y = 0.221 x² + 0.42 sigm(0.75 x − 3.73).
However, EQL also yields interpretable formulas, see Fig. 5(e), which can be used to gain insights into the potential relationship.
Cart-pendulum system. Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set. Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring-damper system, see Fig. 6(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum, and the angular velocity of the pendulum. We combine these into a four-dimensional vector x = (x_1, ..., x_4).
We set up a regression problem with four outputs from the corresponding system of ordinary differential equations, where y_1 = ẋ_1 = x_3, y_2 = ẋ_2 = x_4, and

y_3 = (−x_1 − 0.01 x_3 + x_4² sin(x_2) + 0.1 x_4 cos(x_2) + 9.81 sin(x_2) cos(x_2)) / (sin²(x_2) + 1),
y_4 = (−0.2 x_4 − 19.62 sin(x_2) + x_1 cos(x_2) + 0.01 x_3 cos(x_2) − x_4² sin(x_2) cos(x_2)) / (sin²(x_2) + 1).    (13)

The formulas contain divisions, which are not included in our architecture due to their singularities. To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we cannot expect great extrapolation performance, and this is confirmed by the experiments. In Fig. 6(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both EQL and MLP, but as soon as the training region is left further, even the best instances differ considerably from the true values, see also the numeric results in Tab. 4. The SVR is performing poorly also for the near extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.
Figure 6: Cart-pendulum system. (a) sketch of the system. The lengths and masses are set to 1, the gravitation constant is 9.81 and the friction constant is 0.01. (b,c) slices of outputs y_3 and y_4 for inputs x_1 = x_2 = x_4 = x for the true system equation (Eq. 13) and the best EQL and MLP instances.

Table 4: Interpolation and extrapolation performance for cart-pendulum dynamics. See Tab. 1 for details. Note that predicting 0 would yield an error of 0.96 on the far test set.
         interpolation      extrapol. (near)   extrapol. (far)
EQL      0.0103 ± 0.0000    0.0621 ± 0.0208    0.180 ± 0.056
MLP      0.0101 ± 0.0000    0.0184 ± 0.0008    0.195 ± 0.006
SVR      0.0118             0.227              0.639
"}, {"section_index": "8", "section_name": "CONCLUSIONS", "section_text": "We presented a new network architecture called EQL that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing L1 regularization and fixing the L0 norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.
The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm does not reliably find the right equation but instead finds only an approximation, in which case extrapolation may be poor.
If the origin of the data is not in the hypothesis class, i.e. the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to even larger examples. We expect good scaling capabilities to larger systems due to the gradient-based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was in parts funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036: "Life-long learning of visual scene understanding" (L3ViSU). GM received funding from the People Programme (Marie Curie Actions) in FP7/2007-2013 under REA grant agreement no.
291734."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Debasish Basak, Srimanta Pal, and Dipak Chandra Patranabis. Support vector regression. Neural Information Processing - Letters and Reviews, 11(10):203-224, 2007.
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151-175, 2010.
Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
David S. Broomhead and David Lowe. Radial basis functions, multi-variable functional interpolation and adaptive networks. Technical report, DTIC Document, 1988.
László Győrfi, Wolfgang Härdle, Pascal Sarda, and Philippe Vieu. Nonparametric Curve Estimation from Time Series, volume 60. Springer, 2013.
K.-R. Müller, Alexander J. Smola, Gunnar Rätsch, Bernhard Schölkopf, Jens Kohlmorgen, and Vladimir Vapnik. Predicting time series with support vector machines. In Artificial Neural Networks (ICANN), pp. 999-1004. Springer, 1997.
Judea Pearl. Causality. Cambridge University Press, 2000.
Hoifung Poon and Pedro M. Domingos. Sum-product networks: A new deep architecture, 2012.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81-85, 2009. URL http://science.sciencemag.org/content/324/5923/81.
Yoan Shin and Joydeep Ghosh. The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation. In Proceedings of the International Joint Conference on Neural Networks, pp. 13-18, 1991.
Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004.
Donald F. Specht. A general regression neural network. IEEE Transactions on Neural Networks (TNN), 2(6):568-576, 1991.
Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pp. 267-288, 1996.
Norbert Wiener. Extrapolation, Interpolation, and Smoothing of Stationary Time Series, volume 2. The MIT Press, 1949."}, {"section_index": "11", "section_name": "A1: MODEL SELECTION DETAILS", "section_text": "We actually want a measure of complexity of the formula; however, since it is not clear what the right choice of such a measure is, we use the sparsity instead, counting the number of active/used hidden units, denoted by s. For a given network φ we get

s(φ) = ∑_{l=1}^{L−1} ∑_{i=1}^{k} Θ(|W_i^(l)| − 0.01),

where Θ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula)."}, {"section_index": "12", "section_name": "SELECTION CRITERIA", "section_text": "As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we propose to choose them based on their ranking. Let r^v(φ) and r^s(φ) be the ranks of the network φ w.r.t.
the validation error and sparsity s(φ), respectively; then the network with minimal squared rank norm is selected:

arg min_φ [ r^v(φ)² + r^s(φ)² ].

In Fig. 7 the extrapolation performance of all considered networks for the kin-4-end dataset is visualized in dependence of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error.
Figure 7: Model selection criteria. (a) extrapolation performance depending on validation error and sparsity (s) for the kin-4-end dataset as an illustration. (b) the same as in (a) but in rank-space. Circle arcs indicate the L2 norm iso-lines."}, {"section_index": "13", "section_name": "A2: DEPENDENCE ON NOISE AND NUMBER OF DATA POINTS", "section_text": "In order to understand how the method depends on the amount of noise and the number of data points, we scan through the two parameters and present the empirical results in Fig. 8. In general the method is robust to noise and, as expected, more noise can be compensated by more data.
Figure 8: Interpolation performance (a) and extrapolation performance (b) (on the noise-free test set) depending on the number of data points and the size of the additive noise for the kin-4-end dataset as an illustration. The white line represents an arbitrary threshold below which we consider a successful solution of the interpolation and extrapolation task.
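A compact sketch of this selection procedure (illustrative NumPy; the weight-matrix layout, with one row per hidden unit, is our assumption):

    import numpy as np

    def unit_sparsity(weight_matrices, thresh=0.01):
        # Count active hidden units: units whose incoming weight vector has
        # an L1 norm above the threshold (cf. the sparsity measure s above).
        return int(sum((np.abs(W).sum(axis=1) > thresh).sum()
                       for W in weight_matrices))

    def select_model(val_errors, sparsities):
        # Rank candidates by validation error and by sparsity, then pick
        # the one minimizing the squared L2 norm in rank space.
        rv = np.argsort(np.argsort(val_errors)).astype(float)
        rs = np.argsort(np.argsort(sparsities)).astype(float)
        return int(np.argmin(rv ** 2 + rs ** 2))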
SJCscQcge
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Convolutional neural networks (CNNs) are among the most popular techniques employed for computer vision tasks, including but not limited to image recognition, localization, video tracking, and image and video segmentation (Goodfellow et al., 2016). Though these deep networks have exhibited good performances for these tasks, they have recently been shown to be particularly susceptible to adversarial perturbations to the input images (Szegedy et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016b). Vulnerability of these networks to adversarial attacks can lead to undesirable consequences in many practical applications using them. For example, adversarial attacks can be used to subvert fraud detection, malware detection, or mislead autonomous navigation systems (Papernot et al., 2016b). Further strengthening these results is a recent observation by Kurakin et al. (2016), who showed that a significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through a physical world system (such as a camera).
In this paper, we investigate the problem of robustness of state-of-the-art convolutional neural networks (CNNs) to simple black-box adversarial attacks. The rough goal of adversarial attacks is as follows: given an image I that is correctly classified by a machine learning system (say, a CNN), is it possible to construct a transformation of I (say, by adding a small perturbation to some or all the pixels) that now leads to misclassification by the system? Since large perturbations can trivially lead to misclassification, the attacks seek to limit the amount of perturbation applied under some chosen metric. More often than not, in these attacks, the modification done to the image is so subtle that the changes are imperceptible to a human eye. Our proposed attacks also share this property, in addition to being practical and simplistic, thus highlighting a worrying aspect about the lack of robustness prevalent in these modern vision techniques."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "There are two main research directions in the literature on adversarial attacks, based on different assumptions about the adversarial knowledge of the target network. The first line of work assumes that the adversary has detailed knowledge of the network architecture and the parameters resulting from training (or access to the labeled training set) (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016c). Using this information, an adversary constructs a perturbation for a given image. The most effective methods are gradient-based: a small perturbation is constructed based on the gradients of the loss function w.r.t. the input image and a target label. Often, adding this small perturbation to the original image leads to a misclassification. In the second line of work, an adversary has restricted knowledge about the network, being able only to observe the network's output on some probed inputs (Papernot et al., 2016b). Our work falls into this category. While this black-box model is a much more realistic and applicable threat model, it is also more challenging because it considers weak adversaries without knowledge of the network architecture, parameters, or training data.
Interestingly, our results suggest that this level of access and a small number of queries provide sufficient information to construct an adversarial image.
Table 1: The top row shows the original images and the bottom row shows the perturbed images. The misclassification is as follows: (a) a stingray misclassified as a sea lion, (b) an ostrich misclassified as a goose, (c) a jay misclassified as a junco, and (d) a water ouzel misclassified as a redshank.
As we operate in a black-box setting, we use a gradient-free approach to adversarial image generation. Papernot et al. (2016b) were the first to discuss a black-box attack against deep learning systems. Their attack crucially relies on the observation that there is a transferability (generalization) property in adversarial examples, i.e., adversarial examples from one model transfer to another. Our proposed attacks, on the other hand, are much more simple and direct, do not require this transferability property, and hence are more effective in constructing adversarial images, in addition to having some other computational advantages. We demonstrate that our method is capable of constructing adversarial images for several network architectures trained on different datasets. In particular, in this paper we consider the CIFAR10, MNIST, SVHN, STL10, and ImageNet1000 datasets, and two popular network architectures, Network-in-Network (Lin et al., 2014) and VGG (Simonyan & Zisserman, 2014). In Table 1, we show four images from the ImageNet1000 dataset. The original images are in the upper row. The bottom row shows the corresponding perturbed images produced by our algorithm, which are misclassified by a VGG CNN-S network (Chatfield et al., 2014a).
Our Contributions. In this work, we present simple and effective black-box adversarial attacks on deep convolutional neural networks. We make the following main contributions in this paper.
(1) The first question we investigate is the influence of perturbing a single pixel on the prediction. To do so, we devise a simple scheme, based on randomly selecting a single pixel and applying a strong perturbation to it. Somewhat surprisingly, we noticed that a few trials of this random experiment are already quite enough to generate adversarial images for low resolution image sets. In fact, in many cases, for misclassification, the amount of perturbation needed to be applied to the selected pixel is also quite small. For high-resolution images, a similar phenomenon holds, except our scheme now picks a random set of around 50 pixels. These simple experiments show the ease of generating adversarial images for modern deep CNNs without knowledge of either the network architecture or its parameters. There is however one shortcoming in these approaches, in that the perturbed image might have pixel values that are outside some expected range.
(2) We overcome this shortcoming by showing that lower perturbation suffices if we carefully select the pixels for perturbation. The approach is based on the idea of greedy local search, an iterative search procedure, where in each round a local neighborhood is used to refine the current image, in the process minimizing the probability of the network assigning high confidence scores to the true class label. Again, while the algorithm is quite simple, it is rather effective in generating adversarial images with quite small perturbations.
We also show an interesting connection between the pixels chosen for perturbation by our approach and the saliency map of an image, as defined by Simonyan et al. (2014), which ranks pixels based on their influence on the output score. In effect, our approach identifies pixels with high saliency scores, but without explicitly using any gradient information (as needed in the definition of the saliency map (Simonyan et al., 2014)). Intuitively, in each round, our local-search based approach computes an implicit approximation to the gradient of the current image by understanding the influence of a few pixels on the output, which is then used to update the current image.
(3) We perform extensive experimental evaluations, and show that our local-search based approach reliably generates adversarial examples with little perturbation (even when compared to a recent elegant adversarial attack proposed by Goodfellow et al. (2015), which needs perfect knowledge of the network). Another feature of our attack is that, by design, our approach only perturbs a very small fraction of the pixels during the adversarial image generation process (e.g., on the ImageNet1000 dataset we on average perturb only about 0.5% of the pixels per image). Most previous attacks require the ability to perturb all the pixels in the image.
(4) Our approaches naturally extend to a stronger notion of misclassification (which we refer to as k-misclassification), where the goal is to ensure that the true label of the image does not even appear in the top-k predictions of the network (obtained by sorting the confidence score vector). This notion especially captures the fact that many modern systems (e.g., ImageNet competition entrants) are evaluated based on top-k predictions. To the best of our knowledge, these are the first adversarial attacks on deep neural networks achieving k-misclassification.
Starting with the seminal paper by Szegedy et al. (2014), which showed that state-of-the-art neural networks are vulnerable to adversarial attacks, there has been significant attention focused on this problem. The research has led to investigation of different adversarial threat models, computationally efficient attacks (Goodfellow et al., 2015), perturbation efficient attacks (Moosavi-Dezfooli et al., 2016), and more.
Szegedy et al. (2014) used a box-constrained L-BFGS technique to generate adversarial examples. They also showed a transferability (or generalization) property for adversarial examples, in that adversarial examples generated for one network might also be misclassified by a related network with possibly different hyper-parameters (number of layers, initial weights, etc.). However, the need for solving a series of costly penalized optimization problems makes this technique computationally expensive for generating adversarial examples. This issue was fixed by Goodfellow et al. (2015) who, motivated by the underlying linearity of the components used to build a network, proposed an elegant scheme based on adding perturbation proportional to the sign of the network's cost function gradient. Recently, Moosavi-Dezfooli et al. (2016) used an iterative linearization procedure to generate adversarial examples with lesser perturbation. Another recent attack proposed by Papernot et al. (2016c) uses a notion of adversarial saliency maps (based on the saliency maps introduced by Simonyan et al. (2014)) to select the most sensitive input components for perturbation. This attack has been adapted by Grosse et al. (2016)
for generating adversarial samples for neural networks used as malware classifiers. However, all these above-described attacks require perfect knowledge of the target network's architecture and parameters, which limits their applicability to strong adversaries with the capability of gaining insider knowledge of the target system.
Our focus in this paper is the setting of black-box attacks, where we assume that an adversary has only the ability to use the network as an oracle. The adversary can obtain outputs from supplied inputs and use the observed input-output relationship to craft adversarial images. In the context of deep neural networks, a black-box attack was first proposed by Papernot et al. (2016b), with the motivation of constructing an attack on a remotely hosted system. Their general idea is to first approximate the target network by querying it for output labels, which is used to train a substitute network, which is then used to craft adversarial examples for the original network. The success of the attack crucially depends on the transferability property holding between the original and the substitute network. Our black-box attack is more direct, and completely avoids the transferability assumption, making it far more applicable. We also avoid the overhead of gathering data and training a substitute network. Additionally, our techniques can be adapted to a stronger notion of misclassification.
A complementary line of work has focused on building defenses against adversarial attacks. Although designing defenses is beyond the scope of this paper, it is possible that adapting the previously suggested defense solutions, such as Jacobian-based regularization and distillation (Papernot et al., 2016d), can reduce the efficacy of our proposed attacks. Moreover, the recently proposed technique of differentially private training can also prove beneficial here.
The study of adversarial instability has led to the development of solutions that seek to improve training to in return increase the robustness and classification performance of the network. In some cases, adding adversarial examples to the training set (adversarial training) can act like a regularizer (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016). The phenomenon of adversarial instability has also been theoretically investigated for certain families of classifiers under various models of (semi) random noise. However, as we discuss later, due to the peculiar nature of adversarial images generated by our approaches, simple adversarial training is only mildly effective in preventing future similar adversarial attacks.
The security of machine learning in settings distinct from deep neural networks is also an area of active research, with various known attacks under different threat models. We refer the reader to a recent survey by McDaniel et al. (2016) and references therein.
Notation and Normalization. We denote by [n] the set {1, ..., n}. The dataset of images is partitioned into train and test (or validation) subsets. An element of a dataset is a pair (I, c(I)) for an image I and a ground truth label c(I) of this image. We assume that the class labels are drawn from the set {1, ..., C}, i.e., we have a set of C ∈ N possible labels. We assume that images have ℓ channels (in experiments we use the RGB format) and are of width w ∈ N and height h ∈ N.
We say that (b, x, y) is a coordinate of an image for channel b and location (x, y), and (⋆, x, y) is a pixel of an image, where (⋆, x, y) represents all the ℓ coordinates corresponding to different channels at location (x, y). I(b, x, y) ∈ R is the value of I at the (b, x, y) coordinate, and similarly I(⋆, x, y) ∈ R^ℓ represents the vector of values of I at the (⋆, x, y) pixel.
It is a common practice to normalize the image before passing it to the network. A normalized image has the same dimensions as the original image, but differs in the coordinate values. In this work we treat the normalization procedure as an external procedure and assume that all images are normalized. As we always work with normalized images, in the following, a reference to an image means a normalized input image. We denote by LB and UB two constants such that all the coordinates of all the normalized images fall in the range [LB, UB]. Generally, LB < 0 and UB > 0. We denote by Ī ⊂ R^(ℓ×w×h) the space of all (valid) images, which satisfy the following property: for every I ∈ Ī, for all coordinates (b, x, y) ∈ [ℓ] × [w] × [h], I(b, x, y) ∈ [LB, UB].
We denote by NN a trained neural network (trained on some set of training images). NN takes an image I as an input and outputs a vector NN(I) = (o_1, ..., o_C), where o_j denotes the probability, as determined by NN, that image I belongs to class j. We denote by π(NN(I), k) a function that returns a set of indices that are the top-k predictions (ranked by decreasing probability scores, with ties broken arbitrarily) of the network NN. For example, if NN(I) = (0.25, 0.1, 0.2, 0.45), then π(NN(I), 1) = {4} (corresponding to the location of the entry 0.45). Similarly, π(NN(I), 2) = {4, 1}, π(NN(I), 3) = {4, 1, 3}, etc.
Adversarial Goal. Before we define the goal of black-box adversarial attacks, we define misclassification for a NN. In this paper, we use a stronger notion of misclassification, which we refer to as k-misclassification for k ∈ N.
Definition 1 (k-misclassification) A neural network NN k-misclassifies an image I with true label c(I) iff the output of the network satisfies c(I) ∉ π(NN(I), k).
In other words, k-misclassification means that the network ranks the true label below at least k other labels. Traditionally the literature on adversarial attacks has only considered the case where k = 1. Note that an adversary that achieves a k-misclassification for k > 1 is a stronger adversary than one achieving a 1-misclassification (k-misclassification implies k'-misclassification for all 1 ≤ k' ≤ k). If k = 1, we simply say that NN misclassifies the image.
In our setting, an adversary ADV is a function that takes an image I as input and whose output is another image ADV(I) (with the same number of coordinates as I). We define an adversarial image as one that fools a network into k-misclassification.
Adversarial threat models can be divided into two broad classes. The first class of models roughly assumes that the adversary has total knowledge of the network architecture and the parameters resulting from training (or access to the labeled training set). The second class of threat models, as considered in this paper, makes no assumptions about the adversary having access to the network architecture, network parameters, or the training set. In this case, the adversary has only black-box (oracle) access to the network, in that it can query the network NN on an image I and observe the output NN(I).
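The handful of operations such a black-box adversary needs from the oracle can be sketched in a few lines (illustrative Python; query stands for whatever mechanism returns the probability vector NN(I), and 0-based label indices are assumed):

    import numpy as np

    def top_k(scores, k):
        # pi(NN(I), k): indices of the k highest-probability labels
        return set(np.argsort(scores)[::-1][:k])

    def k_misclassified(query, image, true_label, k=1):
        # An image is k-misclassified if the true label is absent from
        # the network's top-k predictions.
        return true_label not in top_k(query(image), k)

With scores (0.25, 0.1, 0.2, 0.45), top_k(scores, 2) returns {3, 0}, matching the π example above up to the indexing convention.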
In our experimental section (Section 6) we also consider a slight weakening of this black-box model, where the adversary has only the ability to use a proxy of the network NN as an oracle.
A black-box threat model in the context of deep neural networks was first considered by Papernot et al. (2016b). There is however one subtle difference between the threat model considered here and that considered by Papernot et al. (2016b) in what the adversary can access as an output. While the adversary presented in (Papernot et al., 2016b) requires access to the class label assigned by the network, which is the same level of access needed by our simple randomized adversary (presented in Section 4), our local-search adversary (presented in Section 5) requires access to o_{c(I)} (the probability assigned to the true label c(I) by the network on input I) and the π vector (for checking whether k-misclassification has been achieved). Our adversarial approaches do not require access to the complete probability vector NN(I). Also, as pointed out earlier, compared to (Papernot et al., 2016b), our approach is more direct (needs no transferability assumption), requires no retraining, and can be adapted to achieve k-misclassification rather than just 1-misclassification.
Definition 2 (Adversarial Image) Given access to an image I, we say that ADV(I) is a k-adversarial image (resp. adversarial image) if c(I) ∈ π(NN(I), k) and c(I) ∉ π(NN(ADV(I)), k) (resp. c(I) ∈ π(NN(I), 1) and c(I) ∉ π(NN(ADV(I)), 1)).
The goal of adversarial attacks is to design this function ADV so that it succeeds in fooling the network for a large set of images. Ideally, we would like to achieve this misclassification (at test time, once the trained network has been deployed) by adding only some small perturbation (under some metric) to the image. The presence of adversarial images shows that there exist small perturbations in input that produce large perturbations at the output of the last layer. More fine-grained classification of threat models has also been considered in (Papernot et al., 2016c), where adversaries are categorized by the information and capabilities at their disposal."}, {"section_index": "2", "section_name": "4 BLACK-BOX GENERATION: A FIRST ATTEMPT", "section_text": "In this section, we present a simple black-box adversary that operates by perturbing a single pixel (or a small set of pixels) selected at random. In the next section, we build upon this idea to construct an adversary that achieves better success by making adaptive choices.
Power of One Pixel. The starting point of our investigation is to understand the influence of a single pixel in an adversarial setting. Most existing adversarial attacks operate by applying the same perturbation on each individual pixel while minimizing the overall perturbation (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016), while recent research has yielded attacks that perturb only a fraction of the pixels (Papernot et al., 2016c;b; Grosse et al., 2016). However, in all these cases, no explicit restriction is placed on the number of pixels that can be perturbed. Therefore, it is natural to ask: is it possible to force the network to misclassify an image by modifying a single pixel? If so, how strong should this perturbation be? We run several experiments to shed light on these questions. For simplicity, in this section, we focus on the case of 1-misclassification, even though all discussions easily extend to the case of k-misclassification for k > 1. We begin with a useful definition.
We begin with a useful definition.
Definition 3 (Critical Pixel) Given a trained neural network NN and an image I, a pixel (⋆, x, y) in I is a critical pixel if a perturbation of this pixel generates an image that is misclassified by the network NN. In other words, (⋆, x, y) is a critical pixel in I if there exists another image I_p, which differs from I only in values at the pixel location (x, y), such that c(I) ∉ π(NN(I_p), 1).
(In the definition of a critical pixel we have not considered how well the original image I is classified by NN, i.e., whether c(I) ∈ π(NN(I), 1). In particular, if c(I) ∉ π(NN(I), 1), then by definition all pixels in the image are critical even without any perturbation. In our experiments, we ignore such images and only focus on images I where c(I) ∈ π(NN(I), 1), which we refer to as good images (Definition 4).)
The image I_p can be generated in multiple ways; here we consider a class of sign-preserving perturbation functions defined as follows. Let PERT(I, p, x, y) be a function that takes as input an image I, a perturbation parameter p ∈ ℝ, and a location (x, y), and outputs an image I_p^(x,y) ∈ ℝ^(ℓ×w×h) defined coordinate-wise as:
I_p^(x,y)(b, u, v) = I(b, u, v) if x ≠ u or y ≠ v, and I_p^(x,y)(b, u, v) = p × sign(I(b, u, v)) otherwise.
In the following, we say a pixel (⋆, x, y) in image I is critical iff c(I) ∉ π(NN(I_p^(x,y)), 1).
Critical Pixels are Common. Our first experiment investigates the existence of critical pixels in the considered datasets of images. To do so, we perform a simple procedure that picks a location (x, y) in the image I and applies the PERT function to this pixel to obtain a perturbed image I_p^(x,y). The perturbed image is then run through the trained network, and we check whether it is misclassified. If the perturbed image I_p^(x,y) is misclassified, then we have identified a critical pixel. While we could exhaustively repeat this procedure for all pixels in an image, for computational efficiency we instead perform it only on a fraction of randomly chosen pixels, and our results somewhat surprisingly suggest that in many cases this is sufficient to generate an adversarial image. Algorithm RANDADV presents the pseudocode for this experiment: it selects U random pixel locations (with replacement) and checks whether each is critical. The algorithm's output is an unbiased estimate of the fraction of critical pixels in the input image I (a minimal sketch of PERT and this random probe is given after Table 2). Note that the algorithm can fail to generate an adversarial image (i.e., fail to find any critical pixel for an image). The following definition will be useful for our ensuing discussion.
Definition 4 (Good Image) We call an image I good for the network NN if the unperturbed image is classified correctly, i.e., c(I) ∈ π(NN(I), 1). (By focusing on good images, we make sure that we only account for cases where a perturbation is needed to create an adversarial image.)
Our first observation is that sometimes even a small perturbation of a pixel suffices to obtain an adversarial image. Table 2 shows two images and their adversarial counterparts, with p = 1. Often, the original and adversarial images are indistinguishable to the human eye, but sometimes the critical pixel is visible (Table 2).
Table 2: Each row contains an original image followed by a misclassified image in which only one pixel (pointed to by a black arrow) was perturbed with perturbation parameter p = 1. After perturbation, in the first case (images (a) and (b)) an automobile gets misclassified as a truck, and in the second case (images (c) and (d)) a cat gets misclassified as a dog.
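The following is a minimal sketch of PERT and the random probe behind Algorithm RANDADV, under the definitions above. The name query is a hypothetical black-box handle returning the probability vector NN(I); images are arrays of shape (ℓ, w, h).

import numpy as np

def pert(image, p, x, y):
    # PERT(I, p, x, y): replace the pixel (*, x, y) by p * sign(value) in every
    # channel; all other coordinates are left untouched.
    out = image.copy()
    out[:, x, y] = p * np.sign(image[:, x, y])
    return out

def rand_adv(query, image, true_label, p, budget, rng):
    # Core of Algorithm RANDADV: sample `budget` pixel locations with
    # replacement and return the fraction found to be critical (an unbiased
    # estimate of the fraction of critical pixels in the image).
    _, w, h = image.shape
    hits = 0
    for _ in range(budget):
        x, y = rng.integers(w), rng.integers(h)
        if np.argmax(query(pert(image, p, x, y))) != true_label:
            hits += 1
    return hits / budget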
We also tried to understand the effect of larger perturbation parameter values; here we set U to half the number of pixels in each image. After the usual training of the neural network on the training set (see Section 6 for more details about training), we ran Algorithm RANDADV on 1000 randomly drawn images from the test set of the corresponding dataset. In our experiments, we varied the perturbation parameter over the range {1, 5, 10, 100}. Before we consider the results, we note that some of the perturbation values we use might construct images that are not in the original image space (we fix this shortcoming using a local-search based strategy in the next section). These results are still somewhat surprising, because even though we allow large (even out-of-range) perturbations, the perturbation is applied to exactly one pixel in the image, and it appears to suffice to even pick that pixel at random.
Figures 1 and 2 show results for 4 datasets (more details about the datasets and the networks are presented in Section 6). On the x-axis we show the perturbation parameter p. In Figure 1, the y-axis represents the output of Algorithm RANDADV averaged over good images for the network. The first observation we can make is that critical pixels are common, and in fact, as p grows, the fraction of critical pixels increases. For example, in CIFAR10, with p = 100, almost 80% (on average) of the randomly selected pixels are critical. In Figure 2, the y-axis represents the fraction of successful adversarial images generated by Algorithm RANDADV, i.e., the fraction of inputs on which Algorithm RANDADV succeeds in finding at least one critical pixel. Again, we notice that as p grows it gets easier for Algorithm RANDADV to construct an adversarial image.
Figure 1: Output of Algorithm RANDADV (averaged over good images) on (a) MNIST, (b) SVHN, (c) CIFAR10, and (d) STL10. The results are for two networks: a) Network-in-Network and b) VGG. The perturbation parameter p is varied over {1, 5, 10, 100}.
Figure 2: Fraction of images on which Algorithm RANDADV succeeds in finding at least one critical pixel, for the same four datasets; again we start with only good images.
Another observation is that for the MNIST and STL10 datasets, Algorithm RANDADV succeeds in finding fewer critical pixels than for the SVHN and CIFAR10 datasets. We give the following explanation. The majority of pixels in an MNIST image belong to the background and are hence less likely to be critical. On the other hand, STL10 contains high-resolution images, 96 × 96, where perhaps a single pixel has less of an impact on the output prediction.
The latter observation motivated us to generalize the notion of a critical pixel to a critical set.
Definition 5 (Critical Set) Given a trained neural network NN and an image I, a critical set of I is a set of pixels ∪_(x,y) {(⋆, x, y)} in I such that a perturbation of these pixels generates an image that is misclassified by the network NN.
The general goal will be to find critical sets of small size in an image. With this notion of a critical set, we considered constructing adversarial images on the high-resolution ImageNet1000 dataset. We can modify the definition of I_p^(x,y) so that instead of a single pixel we perturb all the pixels in a set. Similarly, we can devise a simple extension of Algorithm RANDADV that operates with sets of pixels and outputs an unbiased estimate of the fraction of critical sets of some fixed size (50 in our case) in the input image. (Searching over all pixel sets of size 50 is computationally prohibitive, which again motivates the need for a randomized strategy as proposed in Algorithm RANDADV.) Note that a set of 50 pixels is still a tiny fraction of all the pixels in a standard (center) crop of size 224 × 224, namely just 0.09%. We use a larger perturbation parameter p than before, and set U, the budget on the number of trials per image, to 5000. Figure 3 shows our results. Overall, we can draw similar conclusions as before: increasing the perturbation parameter creates more critical sets, making them easier to find, and relatively small perturbations are sufficient to construct adversarial images.
Figure 3: Experiments of Figures 1 and 2 repeated for the high-resolution ImageNet1000 dataset, for the networks VGG CNN-M 2048 (Caffe) and VGG ILSVRC 19 (Caffe). The results are again for good images from a set of 1000 randomly selected images. We use a slightly modified version of Algorithm RANDADV that perturbs a set of 50 pixels."}, {"section_index": "3", "section_name": "5 BLACK-BOX GENERATION: A GREEDY APPROACH", "section_text": "The results from Section 4 show that most images have critical pixels, such that modifying these pixels significantly leads to a failure of NN to classify the image correctly. However, one shortcoming of Algorithm RANDADV is that, to build adversarial images, we sometimes had to apply a large perturbation to a single pixel (or a small set of pixels). Hence, there might exist a pixel (or a set of pixels) in the adversarial image whose coordinate value lies outside the valid range [LB, UB]. To overcome this issue, we need to redesign the search procedure to generate adversarial images that still belong to the original image space 𝕀 (defined in Section 3). Here a brute-force approach is generally not feasible for computational reasons, especially for high-resolution images. Hence, we need an efficient heuristic procedure to find the right small set of pixels to be perturbed. Our solution, presented in this section, is based on performing a greedy local search over the image space.
We consider the general k-misclassification problem (Definition 1), where an adversarial attack ensures that the true label does not appear in the top-k predictions of the network. We utilize a local-search procedure, an incomplete search procedure that is widely used for solving combinatorial problems appearing in diverse domains such as graph clustering, scheduling, logistics, and verification (Lenstra, 1997). For a general optimization problem it works as follows. Consider an objective function f(z): ℝ^n → ℝ, where the goal is to minimize f(z). The local-search procedure works in rounds, each consisting of two steps. Let z_{i−1} be the solution iterate after round i − 1, and consider round i. The first step is to select a small subset of points Z = {ẑ_1, ..., ẑ_n}, a so-called local neighborhood, and evaluate f(ẑ_j) for every ẑ_j ∈ Z. Usually, the set Z consists of points that are close to the current z_{i−1} for some domain-specific measure of distance. The second step selects a new solution z_i taking into account both the previous solution z_{i−1} and the points in Z, i.e., z_i = g(f(z_{i−1}), f(ẑ_1), ..., f(ẑ_n)), where g is some pre-defined transformation function.
We adapt this general procedure to search for critical sets efficiently, as explained below. Our optimization problem tries to minimize the probability that the network assigns the perturbed image the class label of the original image, and by using a local-search procedure we generate perturbed images that differ from the original image in only a few pixels. Intuitively, in each round, our local-search procedure computes an implicit approximation to the gradient of the current image by probing the influence of a few pixels on the output, which is then used to update the current image.
First, we need to define the cost function f. Let I be the image (with true label c(I)) whose adversarial image we want to generate for a target neural network NN. For some input image Î, we use the objective function f_{c(I)}(Î), which equals the probability assigned by the network NN that the input image Î belongs to class c(I). More formally,
f_{c(I)}(Î) = o_{c(I)}   where NN(Î) = (o_1, ..., o_C),
with o_j denoting the probability, as determined by NN, that image Î belongs to class j. Our local-search procedure aims to minimize this function.
Second, we consider how to form a neighborhood set of images. As mentioned above, the local-search procedure operates in rounds. Let Î_{i−1} be the image after round i − 1. Our neighborhood consists of images that differ in one pixel from the image Î_{i−1}; if we measure the distance between Î_{i−1} and any image in the neighborhood as the number of perturbed pixels, this distance is the same (equal to one) for all of them. We can therefore define the neighborhood in terms of a set of pixel locations. Let (P_X, P_Y)_i be a set of pixel locations. For the first round, (P_X, P_Y)_0 is randomly generated. At each subsequent round, it is formed based on the set of pixel locations that were perturbed in the previous round. Let (P*_X, P*_Y)_{i−1} denote the pixel locations perturbed in round i − 1 (formally defined below). Then
(P_X, P_Y)_i = ∪_{(a,b) ∈ (P*_X, P*_Y)_{i−1}} ∪_{x ∈ [a−d, a+d], y ∈ [b−d, b+d]} {(x, y)},
where d is a parameter. In other words, we consider the pixels that were perturbed in the previous round, and for each such pixel we consider all pixels in a small square with side length 2d centered at that pixel.
This defines the neighborhood considered in round i.
Third, we describe the transformation function g on a set of pixel locations. The function g takes as input an image Î, a set of pixel locations (P_X, P_Y), a parameter t that defines how many pixels will be perturbed by g, and two perturbation parameters p and r. In round i of the local-search procedure, the function g(Î_{i−1}, (P_X, P_Y)_{i−1}, t, p, r) outputs a new image in which exactly t pixels of Î_{i−1} are perturbed, along with an auxiliary set of pixel locations (P*_X, P*_Y)_i that records which t pixels were perturbed in this round; so we have (Î_i, (P*_X, P*_Y)_i) = g(Î_{i−1}, (P_X, P_Y)_{i−1}, t, p, r). We now describe the transformations that g performs in round i. As the first step, g constructs a set of perturbed images based on (P_X, P_Y)_{i−1}:
ℐ = ∪_{(x,y) ∈ (P_X, P_Y)_{i−1}} {PERT(Î_{i−1}, p, (x, y))}.
Then it computes the score of each image in ℐ as
∀ Î ∈ ℐ : score(Î) = f_{c(I)}(Î),
and sorts the images in ℐ by this score (so that images on which the network's confidence in the true class drops the most come first) to construct sorted(ℐ): pixels whose perturbation leads to a larger decrease of f are more likely to be useful in constructing an adversarial candidate. From sorted(ℐ), g records the set of pixel locations (P*_X, P*_Y)_i based on the first t elements of sorted(ℐ), where the parameter t regulates the number of pixels perturbed in each round. Formally,
(P*_X, P*_Y)_i = {(x, y) : PERT(Î_{i−1}, p, (x, y)) ∈ sorted(ℐ)[1 : t]},
where sorted(ℐ)[1 : t] represents the first t images in sorted(ℐ). Finally, Î_i is constructed from Î_{i−1} by perturbing each pixel at a location (x, y) ∈ (P*_X, P*_Y)_i with a perturbation value r. The perturbation is performed in a cyclic way (as explained in Algorithm CYCLIC), so that all coordinate values in Î_i stay within the valid bounds LB and UB; at the end of every round i, Î_i is thus a valid image from the image space 𝕀.
We want to point out that the function g uses two perturbation parameters, p and r. The value of r is kept small, in the range [0, 2]. On the other hand, we do not put any explicit restriction on the value of p: the best choice of p is one that facilitates the identification of the "best" pixels to perturb in each round. In our experiments, we adjust the value of p automatically during the search; we defer this discussion to the experimental section.
Algorithm LOCSEARCHADV shows the complete pseudocode of our local-search procedure. At a high level, the algorithm takes an image as input and, in each round, finds some pixel locations to perturb using the above objective function, then applies the above transformation function to these selected pixels to construct a new (perturbed) image. It terminates if it succeeds in pushing the true label below the kth place in the confidence score vector at any round; otherwise, it proceeds to the next round (for a maximum of R rounds). Note that the number of pixels perturbed by Algorithm LOCSEARCHADV is at most t × R, and in practice (see Tables 4, 5, and 6 in Section 6) it is much smaller. In round i, we query the network at most as many times as there are pixels in (P_X, P_Y)_i, which after the first round is at most 2d × 2d × t (again, in practice much less because of overlaps between neighborhood squares). In Section 6, we demonstrate the efficacy of Algorithm LOCSEARCHADV in constructing adversarial images; but first we highlight an interesting connection between the perturbed pixels and their influence as measured by the notion of a saliency map. A sketch of a single search round is given below.
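The following is a minimal sketch of one local-search round in Python, following the intuition stated above (candidate pixels whose perturbation lowers the true-class probability the most are kept). The name query is again the hypothetical oracle handle, and cyclic is one natural reading of the wraparound rule of Algorithm CYCLIC, whose exact pseudocode is not reproduced in this text.

import numpy as np

def cyclic(value, r, lb, ub):
    # Scale by r, then shift by the range width if the result leaves [lb, ub],
    # so that the coordinate stays inside the valid image space.
    v = r * value
    if v < lb:
        v += ub - lb
    elif v > ub:
        v -= ub - lb
    return v

def local_search_round(query, img, true_label, px_py, p, r, t, lb, ub):
    # Score each candidate location by the true-class probability of the
    # singly-perturbed image (PERT from Section 4); lower is better.
    img = img.copy()
    scored = []
    for (x, y) in px_py:
        cand = img.copy()
        cand[:, x, y] = p * np.sign(img[:, x, y])
        scored.append((query(cand)[true_label], (x, y)))
    scored.sort(key=lambda e: e[0])
    best = [loc for _, loc in scored[:t]]          # (P*_X, P*_Y)_i
    for (x, y) in best:                            # build the new image I_i
        img[:, x, y] = [cyclic(v, r, lb, ub) for v in img[:, x, y]]
    return img, best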
Algorithm LOCSEARCHADV.
Input: Image I with true label c(I) ∈ {1, ..., C}; two perturbation parameters p ∈ ℝ and r ∈ [0, 2]; and four other parameters: the half side length of the neighborhood square d ∈ ℕ, the number of pixels perturbed at each round t ∈ ℕ, the threshold k ∈ ℕ for k-misclassification, and an upper bound R ∈ ℕ on the number of rounds.
Output: Success/Failure, depending on whether the algorithm finds an adversarial image or not.
A Relation to Saliency Maps. Simonyan et al. (2014) introduced the notion of a saliency map as a way to rank the pixels of an image w.r.t. their influence on the output of the network. The intuition is that influential pixels in the saliency map are more likely to be important pixels that represent objects and, for example, can be used for weakly supervised object localization. Formally, let NN_{c(I)}(I) denote the probability assigned to the true class c(I) by the network NN on input I ∈ ℝ^(ℓ×w×h), and let W_{c(I)} ∈ ℝ^(ℓ×w×h) denote the derivative of NN_{c(I)} with respect to the input, evaluated at image I. The saliency map of I is the matrix M ∈ ℝ^(w×h) such that M_{x,y} = max_{b ∈ [ℓ]} |W_{c(I)}(b, x, y)|, where W_{c(I)}(b, x, y) is the element of W_{c(I)} corresponding to channel b and location (x, y). Pixels with higher scores are considered more influential. In subsequent works, this notion has been extended to adversarial saliency maps that can be useful in generating adversarial perturbations (Papernot et al., 2016c).
Computing exact saliency scores for an image requires complete access to the network NN, which we do not assume. However, a natural hypothesis is that the pixels selected by Algorithm LOCSEARCHADV for perturbation are related to pixels with large saliency scores. We use the ImageNet1000 dataset to test this hypothesis; Table 3 presents some qualitative results. As can be seen from the pictures, the pixels perturbed by Algorithm LOCSEARCHADV appear correlated with pixels of high saliency score. Quantitatively, we observed that the pixels occupying the top 10% of the saliency map on average contain more than 23% of the pixels chosen by Algorithm LOCSEARCHADV for perturbation (and this overlap only grows when we consider a bigger chunk of pixels picked by their saliency scores). This correlation is not a random occurrence. For an image I, let S_I denote the set of pixels in I that rank among the top 10% of the saliency map. If we pick a random set of around 200 pixels (the average number of pixels perturbed per image by Algorithm LOCSEARCHADV, see Table 5), we expect only about 10% of them to intersect with S_I, and standard tail bounds show that the probability that at least 23% of the pixels of this random set intersect with S_I is extremely small. (We can also use a standard hypothesis test for a proportion: under the null hypothesis that the probability of intersection equals 0.1, as with random Bernoulli trials, the test statistic z = (0.23 − 0.1)/√(0.1 · (1 − 0.1)/200) = 6.12 indicates that the null hypothesis can be rejected at significance level 0.01.) Therefore, it appears that Algorithm LOCSEARCHADV rediscovers part of the high-saliency pixels, but without explicitly computing any gradients.
Table 3: Results on ImageNet1000 using the VGG CNN-S (Caffe) network (Chatfield et al., 2014a). Columns from left to right: the original image, the top 150 pixels chosen according to their saliency scores (in white), the absolute difference between the perturbed image and the original image (the perturbed pixels appear in white), and the perturbed image. Adversarial misclassifications (rows from top to bottom): a ruffed grouse misclassified as a frilled lizard, an artichoke misclassified as a sleeping bag, a bubble misclassified as a fountain, and a hare misclassified as a cheetah.
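For contrast with our black-box setting, the following is a short sketch of the (white-box) saliency computation just defined. The argument grad stands for the input gradient W_{c(I)}, which our adversary never has; obtaining it would require full access to the network.

import numpy as np

def saliency_map(grad):
    # M[x, y] = max over channels b of |W_c(I)(b, x, y)|; grad has the same
    # shape (channels, w, h) as the input image.
    return np.abs(grad).max(axis=0)

# Pixels are then ranked by M, e.g., to extract the top-10% set S_I used in the
# overlap statistic above. Sanity check of the reported test statistic:
z = (0.23 - 0.1) / np.sqrt(0.1 * (1 - 0.1) / 200)   # evaluates to ~6.12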
We start by describing our experimental setup. We used the Caffe and Torch machine learning frameworks to train the networks. All algorithms to generate adversarial images were implemented in Lua within Torch 7. All experiments were performed on a cluster of GPUs, using a single GPU for each run.
Datasets. We use 5 popular datasets: MNIST (handwritten digit recognition), CIFAR10 (object recognition), SVHN (digit recognition), STL10 (object recognition), and ImageNet1000 (object recognition).
Models. We trained Network-in-Network (Lin et al., 2014) and VGG (Simonyan & Zisserman, 2014) for MNIST, CIFAR10, SVHN, and STL10, with minor adjustments for the corresponding image sizes. Network-in-Network is a building block of the commonly used GoogLeNet architecture and has demonstrated very good performance on medium-sized datasets such as CIFAR10. VGG is another powerful network that has proved useful in many applications beyond image classification, like object localization (Ren et al., 2015). We trained each model in two variants: with and without batch normalization (Ioffe & Szegedy, 2015); batch normalization was placed before a ReLU layer in all networks. For the ImageNet1000 dataset, we used pre-trained VGG models (we did not train them from scratch due to limited resources). All Caffe VGG models were converted to Torch models using the loadcaffe package (Zagoruyko, 2016a). These models use different normalization procedures, which we reproduced for each model based on the provided descriptions. The second column (ERRTOP-1) of Tables 4 and 5 shows the top-1 (base) error for all datasets and models that we considered; the results are comparable with the known state-of-the-art results on these datasets.
Related Techniques. There are quite a few approaches for generating adversarial images (as discussed in Section 2). Most of these approaches require access to the network architecture and its parameter values (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016c), and are therefore not entirely suited for a direct comparison with our black-box approach. The general idea behind these attacks is to evaluate the network's sensitivity to the input components in order to determine a perturbation that achieves the adversarial misclassification goal. Among these approaches, the attack known as the "fast-gradient sign method", suggested by Goodfellow et al. (2015), stands out for being able to efficiently generate adversarial images.
Here we compare the performance of our local-search based attack against this fast-gradient sign method. (Another reason for picking this approach for comparison is that it is also heavily utilized in the recent black-box attack suggested by Papernot et al. (2016b), which requires additional transferability assumptions that our attack does not need.)
For completeness, we now briefly explain the fast-gradient sign method of Goodfellow et al. (2015). Given an image I_0, a label a ∈ {1, ..., C}, and a network NN, the fast-gradient sign method perturbs I_0 using the update rule
I_0^pert = I_0 + ε · sign(∇_{I=I_0} Loss(NN(I), a)),
where sign(∇_{I=I_0} Loss(NN(I), a)) is the sign of the network's cost-function gradient (here Loss(NN(I), a) denotes the loss function of the network NN given input I and class a). We vary a over all possible labels in the dataset and choose the best result where this procedure succeeds in generating an adversarial image. Without general guidelines for setting ε, we experimented with several values, starting from 0.07 and increasing this number. We found that ε = 0.2 was the smallest value where the fast-gradient sign method started to yield competitive performance compared to our algorithm (for the ImageNet1000 dataset, we set ε differently, as discussed later). Smaller values of ε lead to the generation of fewer adversarial images, e.g., at ε = 0.1 the percentage of generated adversarial images is reduced by around 10% compared to ε = 0.2 for the CIFAR10 dataset on the Network-in-Network model. Larger values of ε tend to generate more adversarial images, but this comes at the cost of an increase in the perturbation. As we discuss later, our local-search based approach yields better results than the fast-gradient sign method in both the volume of adversarial images generated and the amount of perturbation applied. Another important point to remember is that, unlike the fast-gradient sign method, our approach is based on a weaker and more realistic assumption about the adversary's power, making our attacks more widely applicable.
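A minimal sketch of this baseline, as just described, follows. Unlike our attacks, it needs gradient access: loss_grad(image, a) is a hypothetical white-box handle returning the gradient of Loss(NN(image), a) with respect to the image.

import numpy as np

def fgsm_all_labels(query, loss_grad, image, true_label, eps, num_classes):
    # Try every label a, keep the first perturbation that flips the top-1 class.
    for a in range(num_classes):
        adv = image + eps * np.sign(loss_grad(image, a))
        if np.argmax(query(adv)) != true_label:
            return adv
    return None   # the method failed on this image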
Implementing Algorithm LOCSEARCHADV. For each image I, we ran Algorithm LOCSEARCHADV for at most 150 rounds, perturbing 5 pixels at each round, and used squares of side length 10 to form the neighborhood (i.e., R = 150, t = 5, d = 5). With this setting of parameters, we perturb a maximum of t × R = 750 pixels in an image. The perturbation parameter p was adaptively adjusted during the search, which helps to determine the most helpful pixels for generating the adversarial image faster. Let I be the original image. For some round i of the algorithm, define ō_{c(I)} = avg_{(x,y)} {o_{c(I)} : (x, y) ∈ (P_X, P_Y)_{i−1}}, where o_{c(I)} is the probability assigned to the class label c(I) in NN(PERT(Î_{i−1}, p, x, y)) (so ō_{c(I)} approximates the average confidence of the network NN in predicting the true label over the perturbed images). At each round, we increase the value of p if ō_{c(I)} is close to one, and decrease it if ō_{c(I)} is low, e.g., below 0.3. For Algorithm CYCLIC, we set r = 3/2. To avoid perturbing the most sensitive pixels too frequently, we make sure that once a pixel is perturbed in a round, it is excluded from consideration for the next 30 rounds.
Experimental Observations. For ease of comparison with the fast-gradient sign method (Goodfellow et al., 2015), we set k = 1 and focus on achieving 1-misclassification. Tables 4 and 5 show the results of our experiments on the test sets. The first column shows the dataset name. The second column (ERRTOP-1) presents the top-1 misclassification rate on the corresponding test dataset without any perturbation (base error). ERRTOP-1(ADV) is the top-1 misclassification rate when each original image in the test set is replaced with a generated perturbed image (using either our approach or the fast-gradient sign method (Goodfellow et al., 2015), denoted FGSM).
In the following, we say that an adversarial generation technique ADV, given an input image I, succeeds in generating an adversarial image ADV(I) for a network NN iff c(I) ∈ π(NN(I), 1) and c(I) ∉ π(NN(ADV(I)), 1). The CONF column shows the average confidence over all successful adversarial images for the corresponding technique. The PTB column shows the average (absolute) perturbation added per coordinate in cases of successful adversarial generation. More formally, let 𝒯 denote the test set and 𝒯_ADV ⊆ 𝒯 the set of images in 𝒯 on which ADV is successful. Then
PTB = (1/|𝒯_ADV|) Σ_{I ∈ 𝒯_ADV} (1/(ℓwh)) Σ_{(b,x,y)} |I(b, x, y) − ADV(I)(b, x, y)|,
where I ∈ ℝ^(ℓ×w×h) is the original image and ADV(I) ∈ ℝ^(ℓ×w×h) is the corresponding adversarial image; note that the inner summation measures the L1 distance between I and ADV(I). The #PTBPIXELS column shows the average percentage of perturbed pixels in the successful adversarial images, and the TIME column shows the average time (in seconds) to generate a successful adversarial image. Finally, the last column indicates the type of network architecture. A sketch of these perturbation statistics follows.
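A small sketch of the two statistics just defined; pairs is a hypothetical list of (original, adversarial) arrays of shape (ℓ, w, h), one entry per image in 𝒯_ADV.

import numpy as np

def ptb(pairs):
    # Average absolute per-coordinate change over successful attacks; the inner
    # sum over coordinates is the L1 distance, divided by l*w*h.
    return np.mean([np.abs(i - a).sum() / i.size for i, a in pairs])

def ptb_pixels(pairs):
    # Average percentage of perturbed pixels; a pixel counts as perturbed if
    # any of its channel values changed.
    return 100.0 * np.mean([(np.abs(i - a).max(axis=0) > 0).mean() for i, a in pairs])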
As is quite evident from these results, Algorithm LOCSEARCHADV is more effective than the fast-gradient sign method in generating adversarial images, even without having access to the network architecture and its parameter values. The difference is quite prominent for networks trained with batch normalization, where we noticed that the fast-gradient sign method has difficulties producing adversarial images. (Note that by explicitly constraining the number of pixels that can be perturbed, as we do in our approach, it might be impossible to reach a 100% misclassification rate on some datasets; similarly, the fast-gradient sign method fails to achieve a 100% misclassification rate even with larger values of ε (Moosavi-Dezfooli et al., 2016). In general, we observed that models trained with batch normalization are somewhat more resilient to adversarial perturbations, probably because of the regularization properties of batch normalization (Ioffe & Szegedy, 2015).) Another advantage of our approach is that it modifies a very tiny fraction of pixels, compared to all the pixels perturbed by the fast-gradient sign method, and in many cases with far less average perturbation. Putting these points together demonstrates that Algorithm LOCSEARCHADV is successful in generating more adversarial images than the fast-gradient sign method, while modifying far fewer pixels and adding less noise per image. On the other side, the fast-gradient sign method takes less time in the generation process and generally seems to produce higher confidence scores for the adversarial (misclassified) images.
Table 5 shows the results for several variants of the VGG network trained on the ImageNet1000 dataset. These networks do not have batch normalization layers (Chatfield et al., 2014b; Zagoruyko, 2016a). We set ε = 1 for the fast-gradient sign method, as a different pre-processing technique was used for these networks (we converted them from pre-trained Caffe models). Results are similar to those observed on the smaller datasets: in most cases, our proposed local-search based approach is more successful in generating adversarial images, while on average perturbing less than 0.55% of the pixels.
Case of Larger k's. We now consider achieving k-misclassification for k > 1 using Algorithm LOCSEARCHADV. In Table 6 we present the results as we change the goal from 1-misclassification to 4-misclassification on the CIFAR10 dataset, using the same parameters as before for Algorithm LOCSEARCHADV. As one would expect, as we increase the value of k, the effectiveness of the attack decreases, while the perturbation and time needed increase. But overall, our local-search procedure is still able to generate a large fraction of adversarial images even at k = 4, with a small perturbation and computation time, meaning that these images will fool even a system that is evaluated on a top-4 classification criterion. We are not aware of a straightforward extension of the fast-gradient sign method (Goodfellow et al., 2015) to achieve k-misclassification.
Even Weaker Adversarial Models. We also consider a weaker model where the adversary does not even have black-box (oracle) access to the network of interest (NN), and has to rely on black-box access to a somewhat "similar" (proxy) network. For example, the adversary might want to evade a spam filter A, but might have to develop adversarial images by utilizing the output of a spam filter B, which might share properties similar to A. We trained several modifications of the Network-in-Network model for the CIFAR10 dataset, varying the initial value of the learning rate, the size of the filters, and the number of layers in the network. We observed that between 25% and 43% of the adversarial images generated by Algorithm LOCSEARCHADV using the original network were also adversarial for these modified networks (at k = 1). The transferability of adversarial images that we observe here has also been observed with other attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016b) and demonstrates the wider applicability of all these attacks."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "We investigate the inherent vulnerabilities of modern CNNs to practical black-box adversarial attacks. We present approaches that can efficiently locate a small set of pixels, without using any gradient information, which when perturbed lead to misclassification by a deep neural network. Our extensive experimental results, somewhat surprisingly, demonstrate the effectiveness of our simple approaches in generating adversarial examples.
Defenses against these attacks are an interesting research direction.
However, we note here that by limiting the perturbation to a few pixels (being localized), the adversarial images generated by our local-search based approach do not represent the distribution of the original data. This means that for these adversarial images, the use of adversarial training (or fine-tuning), a technique of training (or fine-tuning) networks on adversarial images to build more robust classifiers, is not very effective. In fact, even with adversarial training we noticed that the network's ability to resist new local-search based adversarial attacks improves only marginally (on average between 1% and 2%). On the other hand, we suspect that one possible counter-measure to these localized adversarial attacks could be based on performing a careful analysis of the oracle queries to thwart attempts to generate an adversarial image.
Finally, we believe that our local-search approach can also be used for attacks against other machine learning systems and can serve as a useful tool in measuring the robustness of these systems.
Table 4: Results for four datasets: CIFAR10, STL10, SVHN, and MNIST, for networks trained with and without batch normalization. Columns: Dataset, ERRTOP-1, ERRTOP-1(ADV), CONF, PTB, #PTBPIXELS (%), TIME (in sec), Technique (LOCSEARCHADV (Ours) or FGSM (Goodfellow et al., 2015)), Network (NinN or VGG). The entries denoted by "—" are the cases where the fast-gradient sign method fails to produce any adversarial image in our experimental setup.
Table 5: Results for the ImageNet1000 dataset, using a center crop of size 224 × 224 for each image.
Dataset | ERRTOP-1 | ERRTOP-1(ADV) | CONF | PTB | #PTBPIXELS (%) | TIME (in sec) | Technique | Network
ImageNet1000 | 58.27 | 93.59 | 0.29 | 0.29 | 0.43 | 12.72 | LOCSEARCHADV (Ours) | VGG CNN-S (Caffe)
ImageNet1000 | 58.27 | 85.51 | 0.49 | 1.00 | 100.00 | 4.74 | FGSM (Goodfellow et al., 2015) | VGG CNN-S (Caffe)
ImageNet1000 | 58.96 | 91.36 | 0.28 | 0.29 | 0.40 | 10.01 | LOCSEARCHADV (Ours) | VGG CNN-M (Caffe)
ImageNet1000 | 58.96 | 87.85 | 0.48 | 1.00 | 100.00 | 4.36 | FGSM (Goodfellow et al., 2015) | VGG CNN-M (Caffe)
ImageNet1000 | 58.80 | 92.82 | 0.29 | 0.30 | 0.41 | 11.09 | LOCSEARCHADV (Ours) | VGG CNN-M 2048 (Caffe)
ImageNet1000 | 58.80 | 88.43 | 0.52 | 1.00 | 100.00 | 4.42 | FGSM (Goodfellow et al., 2015) | VGG CNN-M 2048 (Caffe)
ImageNet1000 | 46.40 | 72.07 | 0.30 | 0.54 | 0.55 | 73.64 | LOCSEARCHADV (Ours) | VGG ILSVRC 19 (Caffe)
ImageNet1000 | 46.40 | 85.05 | 0.52 | 1.00 | 100.00 | 23.94 | FGSM (Goodfellow et al., 2015) | VGG ILSVRC 19 (Caffe)
Dataset | k | ERRTOP-k | ERRTOP-k(ADV) | CONF | PTB | #PTBPIXELS (%) | TIME (in sec) | Network
CIFAR10 | 1 | 16.54 | 97.89 | 0.72 | 0.04 | 3.24 | 0.58 | NinN
CIFAR10 | 2 | 6.88 | 76.65 | 0.88 | 0.07 | 5.50 | 1.02 | NinN
CIFAR10 | 3 | 3.58 | 59.02 | 0.90 | 0.08 | 7.09 | 1.85 | NinN
CIFAR10 | 4 | 1.84 | 48.89 | 0.90 | 0.09 | 7.63 | 2.12 | NinN
Table 6: Effect of increasing k on the performance of Algorithm LOCSEARCHADV (without batch normalization).
The authors would like to thank Hamid Maei for helpful initial discussions."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In ACM CCS, 2016.
Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems. Journal of Cryptology, 4(1):3–72, 1991.
Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014b.
Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers' robustness to adversarial perturbations. CoRR, abs/1502.02590, 2015.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. arXiv preprint arXiv:1608.08967, 2016.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org
Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. In ICLR Workshop, 2015.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448–456, 2015.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
Jan Karel Lenstra. Local Search in Combinatorial Optimization. Princeton University Press, 1997.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.
Patrick McDaniel, Nicolas Papernot, and Z. Berkay Celik. Machine learning in adversarial settings. IEEE Security & Privacy, 14(3):68–72, 2016.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, pp. 91–99, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014."}]
SyJNmVqgg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "With large amount of training data as its fuel, deep neural networks (DNN) have achieved state-\nof-art performances in multiple tasks. Examples include deep convolutional neural network (CNN)\n\nfor image understanding (Krizhevsky et al 4 2012} Ioffe & Szegedy | 2015 I He et al. | 2015} {Ren\n\nSota Ko and recurrent neural networks (RNN) for natural language processing (Cho et al]\n\nSota Kiros et al. 2015} Dai & Le| 2015} Shang et al.| 2015p. To effectively train DNN with Targe\n\nscale of data, eer me -batch based Stochastic Gradient Descent (SGD) (and its variants such\n\nas Adagrad 2011), Adadelta (Zeiler| (2012) and Adam (Kingma & Ba} 2014) is\nused. The mini oe based sch training is a st EL process, in which mini-batches of date\nD= {Di,-- Dr} arrive sequentially in a random order. Here Dy = (di,---,das) is\n\nthe mini- batch ot data arriving at the t-th time step and consisting of 1 training instances. After\nreceiving D, at t-th step, the loss and gradient w.r.t. current model parameters W are Ly = aal(din)\n\nand g; = he based on which the neural network model gets updated:\nHere [(-) is the loss function specified by the neural network and 7, is the learning rate at t-th step.\nWith the sequential execution of SGD training, the neural network evolves constantly from a raw\nstate to a fairly mature state, rendering different views even for the same training data. For example,\nas imposed by the spirit of Curriculum Learning (CL) (Bengio et al.|{2009) and Self-Paced Learning\n(SPL) (Kumar et al.|/2010), at the baby stage of the neural network, easy examples play important\n\nroles whereas hard examples are comparatively negligible. In contrast, at the adult age, the neural\n\u201cWorks done when Yang Fan is an intern at Microsoft Research Asia."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Wir = Wi -\u2014 19 t-\nFigure 1: Basic structure of SGD accompanied with NDF. Blue part refers to SGD training process\nand yellow part is NDF.\nnetwork tends to favor harder training examples, since easy ones bring minor changes. It remains ar\nimportant question that, how to optimally and dynamically allocate training data at different stage:\nof SGD training?\nA possible approach is to solve this problem in an active manner: at each time step t, the mini-\nbatch data D; is chosen from all the left untrained data 2016} /Sachan & Xing\n\n. However, this typically requires a feed-forward pass over the whole remaining dataset at\neach training step, making it computationally expensive. We therefore consider a passive way in this\npaper, in which the random ordering of all the mini-batches is pre-given and maintained during the\ntraining process. What actually do is, after receiving the mini-batch D, of M training instances, we\ndynamically determine which instances in D, are used for training and which are filtered, based or\nthe features extracted from the feedforward pass only on D;. Acting in this way avoids unnecessary\ncomputational steps on those filtered data and thus speeds-up the training process.\nPrevious works such as curriculum learning (CL) and self-paced learning (SPL) can be leveraged to\nfulfill such a data filtration task. However, they are i Geaiveral based on simple heuristic rules, such as\nshuffling the sequence length to train language model (2009), or abandoning training\ninstances whose loss values are larger than a human- pace ned threshold So et al. 
Previous works such as curriculum learning (CL) (Bengio et al., 2009) and self-paced learning (SPL) (Kumar et al., 2010) can be leveraged to fulfill such a data filtration task. However, they are in general based on simple heuristic rules, such as ordering training sentences by length to train language models (Bengio et al., 2009), or abandoning training instances whose loss values are larger than a human-defined threshold (Kumar et al., 2010; Jiang et al., 2014a).
In this work, we propose a Neural Data Filter (NDF) framework from a more principled and self-adaptive view. In this framework, as illustrated in Figure 1, the SGD training for DNN is naturally cast into a Markov Decision Process (MDP) (Sutton & Barto, 1998), and the data filtration strategy is fully controlled through deep reinforcement learning (Mnih et al., 2013; Lillicrap et al., 2015b; Mnih et al., 2016). In such an MDP, a state (namely s_1, ..., s_t, ...) is composed of two parts: the mini-batch of data arrived and the parameters of the current neural network model, i.e., s_t = {D_t, W_t}. In each time step t, NDF receives a representation f(s_t) of the current state from SGD and outputs an action a_t specifying which instances in D_t will be filtered, according to its policy A_Θ. Afterwards, the remaining data determined by a_t will be used by SGD to update the neural network state and generate a reward r_t (such as validation accuracy), which will be leveraged by NDF as the feedback for updating its own policy.
From another view, while SGD acts as the trainer for the base model, i.e., the DNN, it is meanwhile the trainee of the reinforcement learning module. In other words, reinforcement learning acts as the teacher module while SGD for DNN is the student. Speaking more ambitiously, such a teacher-student framework based on reinforcement learning goes far beyond data filtration for neural network training: on one hand, the base model that can benefit is not limited to neural networks; on the other hand, the action space of the reinforcement learning teacher module covers any strategy in the machine learning process, such as hyper-parameter tuning and distributed scheduling. Through carefully designed interaction between the two modules, the training process of general machine learning models can be more elaborately controlled.
The rest of the paper is organized as follows. In the next section (Section 2) we introduce the details of Neural Data Filter (NDF), including the MDP language to model Stochastic Gradient Descent training and the policy gradient algorithms to learn NDF. Then in Section 3, empirical results of training LSTM RNN are shown to verify the effectiveness of NDF. We discuss related work in Section 4 and conclude the paper in Section 5.
We now introduce the mathematical details of Neural Data Filter (NDF) for SGD training. As a summary, NDF aims to filter a certain amount of training data within a mini-batch, in order to achieve better convergence speed for SGD training. To achieve that, as introduced in the last section and Figure 1, we cast Stochastic Gradient Descent training for DNN as a Markov Decision Process (MDP), termed SGD-MDP.
SGD-MDP: As a traditional MDP, SGD-MDP is composed of the tuple ⟨s, a, P, r, γ⟩, illustrated as follows. (We consider data instances within the same mini-batch to be independent of each other; therefore, for simplicity of statement, when the context is clear, a will be used to denote the remain/filter decision for a single data instance, i.e., a ∈ {0, 1}.
Similarly, the notation s will sometimes represent the state for only one training instance.)
• s is the state, corresponding to the arrived mini-batch data and the current neural network state: s_t = (D_t, W_t).
• a represents the action space; for the data filtration task we have a = {a_m}_{m=1}^M ∈ {0, 1}^M, where M is the batch size and a_m ∈ {0, 1} denotes whether to filter the m-th data instance in D_t or not. Filtered instances have no effect on the neural network training.
• P_{ss'}^a = P(s'|s, a) is the state transition probability, determined by two factors: 1) the uniform distribution of the sequentially arriving training batch data; and 2) the optimization process specified by the gradient descent principle (cf. equation (1)). The randomness comes from stochastic factors in training, such as dropout (Srivastava et al., 2014).
• r = r(s, a) is the reward, set to be any signal indicating how well the training goes, such as validation accuracy, or the loss gap for the current mini-batch data before/after the model update. Furthermore, future rewards are discounted by a discounting factor γ ∈ [0, 1] into the cumulative reward.
NDF samples the action a by its policy function A = P_Θ(a|s), with parameters Θ to be learnt. For example, the NDF policy A can be set as logistic regression:
A(s, a; Θ) = P_Θ(a|s) = a σ(θ f(s) + b) + (1 − a)(1 − σ(θ f(s) + b)),    (2)
where σ(·) is the sigmoid function and Θ = {θ, b} (a short sampling sketch is given after Algorithm 1 below).
State Features: The aim of designing the state feature vector f(s) is to effectively and efficiently represent the SGD-MDP state. Since the state s includes both the arrived training data and the current neural network state, we adopt three categories of features to compose f(s):
• Data features, containing information about the data instance, such as its label category (we use a 1-of-|Y| representation), the length of the sentence, or linguistic features for text segments (Tsvetkov et al., 2016). Data features are commonly used in Curriculum Learning (Bengio et al., 2009; Tsvetkov et al., 2016).
• Neural network features, including signals reflecting how well the current neural network is trained. We collect several simple features, such as the number of passed mini-batches (i.e., the iteration), the average historical training loss, and the current validation accuracy. They prove to be effective enough to represent the current neural network status.
• Features representing the combination of both data and model. With these features, we aim to represent how important the arrived training data is for the current neural network. We mainly use three such signals in our classification tasks: 1) the predicted probability of each class; 2) the cross-entropy loss, which appears frequently in Self-Paced Learning (Kumar et al., 2010; Jiang et al., 2014a; Sachan & Xing, 2016); 3) the margin value.
The state features f(s) are computed once each mini-batch of training data arrives.
The whole process for training neural networks with NDF is listed in Algorithm 1. In particular, we adopt a generalization framework similar to that proposed in (Andrychowicz et al., 2016), in which part of the training data is used to train the policy of NDF (Steps 1 and 2), and the data filtration model is then applied to the training process on the whole dataset (Step 3). The detailed algorithm to train the NDF policy will be introduced in the next subsection.
Algorithm 1 Training Neural Networks with Neural Data Filter.
Input: Training data D.
1. Sample part of the NDF training data D' from D.
2. Optimize the NDF policy network A(s; Θ) (cf. equation (2)) based on D' by policy gradient.
3. Apply A(s; Θ) to the full dataset D to train the neural network model by SGD.
Output: The neural network model.
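The following minimal sketch shows how filter decisions are sampled from the logistic policy of equation (2): each instance is kept (a_m = 1) with probability σ(θ f(s_m) + b). The feature matrix feats and parameters theta, b are the learnable quantities assumed above.

import numpy as np

def sample_actions(feats, theta, b, rng):
    # feats: (M, dim) matrix whose m-th row is the state feature vector f(s_m).
    keep_prob = 1.0 / (1.0 + np.exp(-(feats @ theta + b)))
    actions = (rng.random(feats.shape[0]) < keep_prob).astype(int)  # a_m in {0,1}
    return actions, keep_prob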
NDF-REINFORCE is based on REINFORCE algorithm ( {1992}, an\nelegant Monto-Carlo based policy gradient method which favors action with high sampled reward.\nThe algorithm details are listed in Algorithm [2| Particularly, as indicated in equation |3| NDF-\nREINFORCE will support data filtration policy leading to higher cumulative reward v;.\nAlgorithm 2 NDF-REINFORCE algorithm to train NDF policy."}, {"section_index": "2", "section_name": "NDF-ActorCritic.", "section_text": "The gradient estimator in REINFORCE poses high variance given its Monto-Carlo nature. Further-\nmore, it is quite inefficient to update policy network only once in each episode. We therefore design\nNDF-ActorCritic algorithm based on value function estimation. In NDF-ActorCritic, a parametric\nvalue function estimator Q(s,a;W) (i.e., a critic) with parameters W for estimating state-action"}, {"section_index": "3", "section_name": "margin value}?\n\n(2016); 3) th", "section_text": "Policy gradient methods are adopted to learn NDF policy A. In particular, according to different\npolicy gradient methods, we designed two algorithms: NDF-REINFORCE and NDF-ActorCritic.\nIgorithm 2 NDF-REINFORCE algorithm to train NDF policy.\n\nInput: Training data D\u2019. Episode number L. Mini-batch size M. Discount factor + \u20ac [0, 1].\nfor each episode / = 1,2,---,L do\nInitialize the base neural network model.\nShuffle D\u2019 to get the mini-batches sequence D\u2019 = {D, Do,---, Dr}.\nfort =1,---,T do\nSample data filtration action for each data instance in D, = {dj,---,dyc}: a =\n{am }M_1, dm \u00ab A(S8m,4; 9), Sm is the state corresponding to the d,,\nUpdate neural network model by Gradient Descent based on the selected data in D;.\nReceive reward r;.\nend for\nfort =1,---,T do\nCompute cumulative reward vy, = 7; + yrigi +77 + tre.\nUpdate policy parameter O:\n\n0+ O+ay, > d log ACS a 0) 3\n\nm\n\nend for\nend for\nOutput: The NDF policy network A(s,a; 0).\nO+o04 av >> Poe Asani)\n\nm\nQ(s,a;W) = o(wi relu(f(s)W a) +),\nAlgorithm 3 NDF-ActorCritic algorithm to train NDF policy."}, {"section_index": "4", "section_name": "3.1 EXPERIMENTS SETUP", "section_text": "We conduct experiments on two different tasks/models: IMDB movie review sentiment classifi-\ncation (with Recurrent Neural Network) and MNIST digital image classification (with Multilayer\nPerceptron Network). Different data filtration strategies we applied to SGD training include:\nvalue function is leveraged to avoid the high variance of v, from Monto-Carlo sampling in NDF-\nREINFORCE. It remains an open and challenging question that how to define optimal value function\nestimator ((s,a;W) for SGD-MDP. Particularly in this work, as a preliminary attempt, the follow-\ning function is used as the critic:\nwhere f(s) = (f (51); f(s2);-++, f(Sac)) is a matrix with M rows and each row f(s) represents\nstate features for the corresponding training instance d,,. W = {wo, Wi, b} is the parameter set\nto be learnt by Temporal-Difference algorithm. Base on such a formulation, the details of NDF-\nActorCritic is listed in Algorithm [3]\nRUE RERARER wD TNE MT LAURE IAIN ATURE YC INE pai:\n\nInput: Training data D\u2019. Episode number L. Mini-batch size M. 
"}, {"section_index": "3", "section_name": "NDF-ActorCritic.", "section_text": "The gradient estimator in REINFORCE has high variance given its Monte-Carlo nature. Furthermore, it is quite inefficient to update the policy network only once per episode. We therefore design the NDF-ActorCritic algorithm based on value function estimation. In NDF-ActorCritic, a parametric value function estimator Q(s, a; W) (i.e., a critic) with parameters W, estimating the state-action value function, is leveraged to avoid the high variance of v_t caused by Monte-Carlo sampling in NDF-REINFORCE. It remains an open and challenging question how to define an optimal value function estimator Q(s, a; W) for SGD-MDP. In this work, as a preliminary attempt, the following function is used as the critic:
Q(s, a; W) = σ(w_1 relu(f(s) w_0) a + b),    (4)
where f(s) = (f(s_1); f(s_2); ...; f(s_M)) is a matrix with M rows, each row f(s_m) representing the state features for the corresponding training instance d_m, and W = {w_0, w_1, b} is the parameter set to be learnt by the Temporal-Difference algorithm. Based on this formulation, the details of NDF-ActorCritic are listed in Algorithm 3.
Algorithm 3 NDF-ActorCritic algorithm to train the NDF policy.
Input: Training data D'. Episode number L. Mini-batch size M. Discount factor γ ∈ [0, 1].
for each episode l = 1, 2, ..., L do
    Initialize the base neural network model.
    Shuffle D' to get the mini-batch sequence D' = {D_1, D_2, ..., D_T}.
    for t = 1, ..., T do
        Sample the data filtration action for each data instance in D_t = {d_1, ..., d_M}: a = {a_m}_{m=1}^M, a_m ∼ A(s_m, a; Θ), where s_m is the state corresponding to d_m, and s = {s_m}_{m=1}^M.
        Update the neural network model by gradient descent based on the selected data.
        Receive the reward r_t.
        Update the policy (actor) parameters Θ: Θ ← Θ + α Q(s, a; W) Σ_m ∂ log A(s_m, a_m; Θ) / ∂Θ.
        Update the critic parameters W:
            q = r_{t−1} + γ Q(s, a; W) − Q(s', a'; W),   W ← W − β q ∂Q(s', a'; W) / ∂W,    (5)
        where s' and a' denote the state and action of the previous time step, i.e., s' = s_{t−1} and a' = a_{t−1}.
    end for
end for
Output: The NDF policy network A(s, a; Θ)."}, {"section_index": "4", "section_name": "3.1 EXPERIMENTS SETUP", "section_text": "We conduct experiments on two different tasks/models: IMDB movie review sentiment classification (with a Recurrent Neural Network) and MNIST digit image classification (with a Multilayer Perceptron Network). The different data filtration strategies we apply to SGD training include:
• Unfiltered SGD. The SGD training algorithm without any data filtration. Here, rather than vanilla SGD (cf. equation (1)), we use its advanced variants, Adadelta (Zeiler, 2012) or Adam (Kingma & Ba, 2014), for each task.
• Self-Paced Learning (SPL) (Kumar et al., 2010). This refers to filtering training data by its "hardness", as reflected by the loss value. Mathematically speaking, training data d satisfying l(d) > η will be filtered out, where the threshold η grows from smaller to larger during the training process. In our implementation, to improve the robustness of SPL and following the widely used trick (Jiang et al., 2014b), we filter data by its loss rank within one mini-batch rather than by the absolute loss value: we filter the data instances with the top K largest training losses within an M-sized mini-batch, where K linearly drops from M − 1 to 0 during training (a short sketch of this rank-based filter is given at the end of this subsection).
• NDF-REINFORCE. The policy trained with NDF-REINFORCE, as shown in Algorithm 2. We use a signal indicating training speed as the reward. Concretely, we set an accuracy threshold τ ∈ [0, 1] and record the first mini-batch index i_τ at which validation accuracy exceeds τ; the terminal reward is then set as r_T = −log(i_τ/T). Note that only a terminal reward exists (i.e., r_t = 0, ∀t < T).
• NDF-ActorCritic. The policy trained with NDF-ActorCritic, as shown in Algorithm 3. The discount factor is set as γ = 0.95. Since the actor-critic algorithm makes it possible to update the policy per time step rather than per episode, and differing from the terminal reward used in NDF-REINFORCE, validation accuracy is used as the immediate reward for each time step. To save time, only part of the validation set is used to compute this validation accuracy.
• Randomly Drop. To conduct a more comprehensive comparison, for NDF-REINFORCE and NDF-ActorCritic we record the ratio of filtered data instances per epoch, and then randomly filter data in each mini-batch according to the logged ratio. In this way we form two more baselines, referred to as RandDropREINFORCE and RandDropActorCritic, respectively.
For all strategies other than Unfiltered SGD, we make sure that the base neural network model is not updated until M un-trained, yet selected, data instances are accumulated. This ensures that the effective batch size is the same (i.e., M) for every strategy, so that convergence speed is determined only by the effectiveness of the data filtration strategy, not by different batch sizes caused by different numbers of filtered instances. For the NDF strategies, we initialize b = 2 (cf. equation (2)), with the goal of keeping most of the training data at the early stages, and then gradually optimize the policy. The models are implemented with Theano (Theano Development Team, 2016) and run on one Tesla K40 GPU.
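A minimal sketch of the rank-based SPL baseline described above: within each M-sized mini-batch, the K instances with the largest training losses are dropped, with K annealed linearly from M − 1 to 0 over the course of training.

import numpy as np

def spl_filter(losses, progress):
    # losses: length-M loss vector for the current batch; progress in [0, 1].
    m = len(losses)
    k = int(round((m - 1) * (1.0 - progress)))   # K drops linearly from M-1 to 0
    drop = set(np.argsort(losses)[::-1][:k])     # indices of the K largest losses
    return [i for i in range(m) if i not in drop]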
"}, {"section_index": "4", "section_name": "3.1 EXPERIMENTS SETUP", "section_text": "We conduct experiments on two different tasks/models: IMDB movie review sentiment classification (with a Recurrent Neural Network) and MNIST digit image classification (with a Multilayer Perceptron Network). The data filtration strategies we apply to SGD training include:

- Unfiltered SGD. The SGD training algorithm without any data filtration. Here, rather than vanilla SGD (c.f. equation (1)), we use its advanced variants, Adadelta (Zeiler, 2012) or Adam (Kingma & Ba, 2014), for each task.
- Self-Paced Learning (SPL) (Kumar et al., 2010). It refers to filtering training data by their 'hardness', as reflected by the loss value. Mathematically speaking, training data d satisfying l(d) > η are filtered out, where the threshold η grows from smaller to larger during the training process. In our implementation, to improve the robustness of SPL, following a widely used trick (Jiang et al., 2014b), we filter data by their loss rank within one mini-batch, rather than by the absolute loss value. That is to say, we filter the data instances with the top K largest training losses within an M-sized mini-batch, where K linearly drops from M − 1 to 0 during training.
- NDF-REINFORCE. The policy trained with NDF-REINFORCE, as shown in Algorithm 2. We use a signal indicating training speed as the reward. Concretely, we set an accuracy threshold τ ∈ [0, 1] and record the first mini-batch index i_τ at which the validation accuracy exceeds τ; the reward is then set as r_T = −log(i_τ/T). Note that only a terminal reward exists (i.e., r_t = 0, ∀t < T).
- NDF-ActorCritic. The policy trained with NDF-ActorCritic, as shown in Algorithm 3. The discount factor is set as γ = 0.95. Since the actor-critic algorithm makes it possible to update the policy per time step rather than per episode, and differently from the terminal reward used in NDF-REINFORCE, the validation accuracy is used as the immediate reward at each time step. To save time, only part of the validation set is extracted to compute the validation accuracy.
- Randomly Drop. To conduct a more comprehensive comparison, for NDF-REINFORCE and NDF-ActorCritic we record the ratio of filtered data instances per epoch, and then randomly filter data in each mini-batch according to the logged ratio. In this way we form two more baselines, referred to as RandDropREINFORCE and RandDropActorCritic, respectively.

For all strategies other than Unfiltered SGD, we make sure that the base neural network model is not updated until M un-trained, yet selected, data instances have been accumulated. In this way the batch size is the same for every strategy (i.e., M), so that convergence speed is determined only by the effectiveness of the data filtration strategy, and not by different batch sizes caused by different numbers of filtered data. For the NDF strategies, we initialize b = 2 (c.f. equation (2)), with the goal of keeping most of the training data at the early stage, before the policy is optimized. The model is implemented with Theano (Theano Development Team, 2016) and run on one Tesla K40 GPU.

The IMDB movie review dataset[2] is a binary sentiment classification dataset consisting of 50k movie review comments with positive/negative sentiment labels (Maas et al., 2011). We apply an LSTM (Hochreiter & Schmidhuber, 1997) RNN to each sentence, and the last hidden state of the LSTM is fed into a logistic regression classifier to predict the sentiment label (Dai & Le, 2015). The model size (i.e., word embedding size × hidden state size) is 256 × 512 and the mini-batch size is set as M = 16. Adadelta (Zeiler, 2012) is used to perform the LSTM model training.

[2] http://ai.stanford.edu/~amaas/data/sentiment/

The IMDB dataset contains 25k training sentences and 25k test sentences. For NDF-REINFORCE and NDF-ActorCritic, from all the training data we randomly sample 10k and 5k instances as the training/validation sets used to learn the data filtration policy. For NDF-REINFORCE, the validation accuracy threshold is set as τ = 0.8. For NDF-ActorCritic, the size of the sub validation set used to compute the immediate reward is 1k. The episode number is set as L = 30. Early stopping on the validation set is used to control the training process in each episode.

The detailed results are shown in Figure 2, whose x-axis represents the number of effective training instances and y-axis denotes the accuracy on the test dataset. All the curves are results of 5 repeated runs. From the figure we have the following observations:

- NDF (shown by the two solid lines) significantly boosts the convergence of SGD training for the LSTM. With much less data, NDF achieves satisfactory classification accuracy. For example, NDF-REINFORCE achieves 80% test accuracy with only roughly half the training data (about 40k) that Unfiltered SGD consumes (about 80k). Furthermore, NDF significantly outperforms the two Randomly Drop baselines, demonstrating the effectiveness of the learnt policies.
- Self-Paced Learning (shown by the red dashed line) helps the initialization of the LSTM; however, it delays training after the middle phase.
- Of the two variants of NDF, NDF-REINFORCE performs better than NDF-ActorCritic. Our conjecture for the reason is: 1) for NDF-REINFORCE, we use a terminal reward fully devoted to indicating training convergence; 2) the critic function (c.f. equation (4)) may not be expressive enough to approximate true state-action value functions. Deeper critic functions should be the next step.

Figure 2: Test accuracy curves of different data filtration strategies on the IMDB sentiment classification dataset. The x-axis records the number of effective training instances.

Figure 3: Data filtration ratio during training the LSTM with the NDF-REINFORCE and NDF-ActorCritic policies.

To better understand the learnt policies of NDF, in Figure 3 we plot the ratio of filtered data instances per every certain number of iterations. It can be observed that more and more training data are kept during the training process, which is consistent with the intuition of Curriculum Learning and Self-Paced Learning.
Furthermore, the learnt feature weights for the NDF policies (i.e., θ in equation (2)) are listed in Table 1. From the table, we can observe:

- Longer movie reviews with positive sentiments are likely to be kept.
- Margin plays a critical role in determining the importance of data. As reflected by its fairly large positive weights, training data with large margin are likely to be kept.
- Note that the feature −log p_y is the training loss; its negative weights mean that training instances with larger loss values tend to be filtered, so that more and more data are kept as loss values become smaller during training, which is consistent with the curves in Figure 3. However, such a trend is diminished by the negative weight values for the neural network features, i.e., historical training accuracy and normalized iteration.

Table 1: Feature weights learnt for the NDF policies in IMDB sentiment classification. The first row lists all the features (i.e., f(s)), categorized into the three classes described in Section 2; 'normalized' means the feature value is scaled into [0, 1], and [y_0, y_1] is the 1-of-2 representation of the sentiment label.

    Feature           y_0    y_1    normalized        average historical  normalized  log p_0  log p_1  −log p_y  margin  bias b
                                    sentence length   training accuracy   iteration
    NDF-REINFORCE     0.03   0.82   0.12              0.11                0.53        0.26     0.06     0.22      1.10    2.18
    NDF-ActorCritic  −0.08   0.77   0.20             −0.13               −0.61        0.20     0.04    −0.12      1.12    1.84

"}, {"section_index": "5", "section_name": "3.3. IMAGE CLASSIFICATION ON CORRUPTED-MNIST", "section_text": "We further test the different data filtration strategies for multilayer perceptron network training on an image recognition task. The dataset we use is MNIST, which consists of 60k training and 10k test images of handwritten digits from 10 categories (i.e., 0, ..., 9). To further demonstrate the effectiveness of the proposed Neural Data Filter in automatically choosing important instances for training, we manually corrupt the original MNIST dataset by injecting noise into the original pictures as follows: we randomly split the 60k training images into ten folds, and flip (i − 1) × 10% randomly chosen pixels of each image in the i-th fold, i = 1, 2, ..., 10. The 10k test set remains unchanged. Flipping a pixel means setting its value r to r = 1.0 − r. The corrupted dataset is named C-MNIST. Some sampled images from C-MNIST are shown in Figure 4; a small sketch of the corruption procedure is given below.

Figure 4: Sampled pictures from the C-MNIST dataset. Each row represents a corrupted fold of the training set, with the percentage of flipped pixels growing from 0% (top row) to 90% (bottom row).
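As a concrete illustration of the corruption procedure described above, here is a minimal numpy sketch (the function name and the fixed random seed are our own choices; the paper gives no reference implementation):

import numpy as np

def corrupt_mnist(images, rng=None):
    """Build C-MNIST: split the 60k training images into ten folds and
    flip (i - 1) * 10% randomly chosen pixels of every image in fold i."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images = images.copy()                      # shape (60000, 784), values in [0, 1]
    n, d = images.shape
    folds = rng.permutation(n).reshape(10, -1)  # ten random folds of equal size
    for i, fold in enumerate(folds, start=1):
        n_flip = round((i - 1) * 0.10 * d)      # 0% for fold 1, ..., 90% for fold 10
        for idx in fold:
            pix = rng.choice(d, size=n_flip, replace=False)
            images[idx, pix] = 1.0 - images[idx, pix]   # flipping: r -> 1.0 - r
    return images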
A three-layer feedforward neural network of size 784 × 300 × 10 is used to classify the C-MNIST dataset. For the data filtration policy, different from the single-layer logistic regression of equation (2), in this task NDF-REINFORCE and NDF-ActorCritic leverage a three-layer neural network of size 24 × 12 × 1 as the policy network, where the first layer's node number 24 is the dimension of the state features f_s,[3] and the sigmoid function is used as the activation function of the middle layer. 10k images randomly selected from the 60k training set act as the validation set providing reward signals to NDF-REINFORCE and NDF-ActorCritic. For NDF-REINFORCE, the validation accuracy threshold is set as τ = 0.90. For NDF-ActorCritic, the immediate reward is computed on the whole validation set. The episode number for policy training is set as L = 50, and we control training in each episode by early stopping based on validation set accuracy. We use Adam (Kingma & Ba, 2014) to optimize the policy network.

[3] f_s is similar to the features in Table 1, except that (y_0, y_1) and (log p_0, log p_1) are switched to (y_0, ..., y_9) and (log p_0, ..., log p_9) respectively, given that there are ten target classes.

The test set accuracy curves (averaged over five repeated runs) of the different data filtration strategies are demonstrated in Figure 5. From Figure 5 we can observe:

- Similar to the result in IMDB sentiment classification, NDF-REINFORCE achieves the best convergence speed.
- The performance of NDF-ActorCritic is inferior to NDF-REINFORCE. In fact, NDF-ActorCritic acts similarly to SGD training without any data filtration. This further shows that although Actor-Critic reduces variance compared with REINFORCE, the difficulty of designing/training better critic functions hurts its performance.

Figure 5: Test accuracy curves of different data filtration strategies on the C-MNIST dataset. The x-axis records the number of effective training instances.

"}, {"section_index": "6", "section_name": "4 RELATED WORK", "section_text": "Plenty of previous works discuss data scheduling (e.g., filtration and ordering) strategies for machine learning. A remarkable example is Curriculum Learning (CL) (Bengio et al., 2009), showing that a data order from easy instances to hard ones, a.k.a. a curriculum, benefits the learning process. The measure of hardness in CL is typically determined by heuristic understandings of the data. As a comparison, Self-Paced Learning (SPL) (Kumar et al., 2010) quantifies the hardness by the loss on the data: training instances with loss values larger than a threshold η are neglected, and η gradually increases during the training process such that finally all training instances come into play. Apparently, SPL can be viewed as a data filtration strategy of the kind considered in this paper.

Recently, researchers have noticed the importance of data scheduling for training deep neural network models. For example, in (Loshchilov & Hutter, 2015), a simple batch selection strategy based on the loss values of training data is proposed to speed up neural network training. (Tsvetkov et al., 2016) leverages Bayesian Optimization to optimize a curriculum function for training distributed word representations. The authors of (2016) investigated several hand-crafted criteria for data ordering in solving question answering tasks based on DNNs.
Our work differs significantly from these works in that: 1) we aim to filter data within randomly arriving mini-batches during the training process to save computational effort, rather than to actively select mini-batches; 2) we leverage reinforcement learning to automatically derive the optimal policy according to the feedback of the training process, rather than using naive and heuristic rules.

The proposed Neural Data Filter (NDF) for data filtration is based on deep reinforcement learning (DRL) (Mnih et al., 2013; 2016; Lillicrap et al., 2015a; Silver et al., 2016), which applies deep neural networks to reinforcement learning (Sutton & Barto, 1998). In particular, NDF belongs to policy-based reinforcement learning, seeking to search directly for the optimal control policy. REINFORCE (Williams, 1992) and actor-critic (Konda & Tsitsiklis, 1999) are two representative policy gradient algorithms, with the difference that actor-critic adopts value function approximation to reduce the high variance of the policy gradient estimator in REINFORCE.

"}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper we introduce Neural Data Filter (NDF), a reinforcement learning framework to select/filter data for training deep neural networks. Experiments on the training of several deep neural networks demonstrate that NDF boosts the convergence of Stochastic Gradient Descent. Going beyond data filtration, the proposed framework is able to supervise any sequential training process, thus opening a new view for self-adaptively tuning/controlling a machine learning process.

As to future work, on one hand, we aim to apply NDF to more tasks and models, such as Convolutional Neural Networks (CNN) for image classification. We also plan to give a clearer explanation of the behavior of NDF, such as what data is dropped at different phases of training, and whether the proposed critic function is good enough. On the other hand, we aim to apply such a reinforcement learning based teacher-student framework to other strategy design problems for machine learning, such as hyper-parameter tuning, structure learning and distributed scheduling, with the hope of providing better guidance for a controlled training process.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Multi-class classification with maximum margin multiple kernel. In ICML (3), pp. 46-54, 2013.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Lu Jiang, Deyu Meng, Teruko Mitamura, and Alexander G Hauptmann. Easy samples first: Self-paced reranking for zero-example multimedia search. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 547-556. ACM, 2014a.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

M Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189-1197, 2010.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015a.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015b.

Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343, 2015.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91-99, 2015.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

James S Supancic and Deva Ramanan. Self-paced learning for long-term tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012."}]
ByOK0rwlx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "t is widely believed that deeper networks tend to\nchieve better performance than shallow ones in vari-\nus computer vision tasks. As a trade-off of such im-\nressive improvements, deeper networks impose heavy\nomputational load both in terms of processing time\nind memory consumption due to an enormous amount\nf network parameters. For example, VGG-16 model\nSimonyan & Zisserman| 2015) requires about 528\nViBytes to store the network weights where fully con-\nected layers account for 89% of them. A large number\nf multiplications and additions must also be processed\nit each layer which prevent real-time processing, con-\nume vast amounts of electricity, and require a large\number of logic gates when implementing a deep net-\nvork on a FPGA or ASIC.\nThis article addresses the above issues. Specifically, we aimed to reduce the test-time computational\nload of a pre-trained network. Since our approach does not depend on a network configuration\n(e.g. a choice of an activation function, layer structures, and a number of neurons) and acts as a\npost-processing of network training, pre-trained networks shared in a download site of MatConvNet\ncan be compressed and accelerated. Our method\n\n(Vedaldi & Lenc}|2015) and Model Zoo\nis outlined in Figure[I] The main idea is to factorize both weights and activations into integer and\n\nnon-integer components. Our method is composed of two building blocks. as shown below."}, {"section_index": "1", "section_name": "TERNARY WEIGHT DECOMPOSITION AND BINARY AC-\nTIVATION ENCODING FOR FAST AND COMPACT NEU-\nRAL NETWORK", "section_text": "Takayoshi Yamashita & Hironobu Fujiyoshi\nfyamashita,hf}@cs.chubu.ac. jr"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[his paper aims to reduce test-time computational load of a deep neural network.\nJnlike previous methods which factorize a weight matrix into multiple real-valued\nnatrices, our method factorizes both weights and activations into integer and non-\nnteger components. In our method, the real-valued weight matrix is approximated\nyy a multiplication of a ternary matrix and a real-valued co-efficient matrix. Since\nhe ternary matrix consists of three integer values, {\u20141, 0, +1}, it only consumes\n2 bits per element. At test-time, an activation vector that passed from a previous\nayer is also transformed into a weighted sum of binary vectors, {\u20141, +1}, which\nables fast feed-forward propagation based on simple logical operations: AND,\nXOR, and bit count. This makes it easier to deploy a deep network on low-power\n\u201cPUs or to design specialized hardware.\nIn our experiments, we tested our method on three different networks: a CNN for\nhandwritten digits, VGG-16 model for ImageNet classification, and VGG-Face for\nlarge-scale face recognition. In particular, when we applied our method to three\nfully connected layers in the VGG-16, 15x acceleration and memory compression\nup to 5.2% were achieved with only a 1.43% increase in the top-5 error. 
Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network, in exchange for a slight increase in error.

Figure 1: Our network compression model.

Ternary weight decomposition for memory compression: We introduce a factored representation where the real-valued weight matrix is approximated by the multiplication of a ternary basis matrix and a real-valued co-efficient matrix. While the ternary basis matrix is sufficiently informative to reconstruct the original weights, it only consumes 2 bits per element. The number of rows of the co-efficient matrix is also smaller than that of the original weight matrix. These compact representations result in efficient memory compression.

Binary activation encoding for fast feed-forward propagation: It has been reported that an inner product between a ternary and a binary vector can be computed extremely fast by using three logical operations: AND, XOR, and bit count (Ambai & Sato, 2014). To use this technique, we approximate the activation vector by a weighted sum of binary vectors. This binary encoding must be processed as fast as possible at test-time. To overcome this issue, we use a fast binary encoding method based on a small lookup table."}, {"section_index": "3", "section_name": "1.1 RELATED WORK", "section_text": "There have been extensive studies on accelerating and compressing deep neural networks, e.g., an FFT-based method (Mathieu et al., 2014), re-parameterization of a weight matrix (Yang et al., 2015), pruning of network connections (Han et al., 2015; 2016), and hardware-specific optimization (Vanhoucke et al., 2011). In the following paragraphs, we only review previous studies that are intimately connected to ours.

It was pointed out by Denil et al. (2013) that network weights have a significant redundancy. Motivated by this fact, researchers have been involved in a series of studies on matrix/tensor factorization (Jaderberg et al., 2014; Zhang et al., 2015). In these studies, a weight matrix (or tensor) was factorized by minimizing the approximation error of the original weights or activations. Jaderberg et al. (2014) exploited 1-D separable filter decomposition to accelerate feed-forward propagation. Zhang et al. (2015) proposed low-rank approximation based on generalized SVD to compress an entire deep network. Taking into account the lessons learned from these best practices, we also exploit the redundancy of the weights.

There is another series of studies on integer decomposition (Hare et al., 2012; Yuji et al., 2014; Ambai & Sato, 2014), which accelerate the test-time speed of a classifier by using fast logical operations. Although their contributions are limited to shallow architectures such as a linear SVM, they achieved a noticeable acceleration. In these approaches, a real-valued weight vector is approximated by a weighted sum of a few binary or ternary basis vectors. To use fast logical operations, they extracted binary features from an image. Hare et al. (2012) and Yuji et al. (2014) exploited binary basis vectors, and Ambai & Sato (2014) investigated the case of a ternary basis to improve approximation quality.

In a manner of speaking, our method is a unified framework of the matrix/tensor factorization and integer decomposition reviewed above, and inherits both their advantages. While the weight matrix is factorized to exploit its low-rank characteristics, the basis matrix is restricted to take only the three integer values {−1, 0, +1}. In contrast to recent binary weighted networks such as XNOR-Net (Rastegari et al., 2016), which quantizes both activations and weights during backpropagation, it is not necessary for our method to change the training algorithm at all. We can benefit from recent sophisticated training techniques, e.g. batch normalization (Ioffe & Szegedy, 2015), in combination with our method. Furthermore, our method does not need (iterative) end-to-end training, which is needed in several previous studies such as network pruning (Han et al., 2015) and distillation.
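To illustrate the logical-operation trick from Ambai & Sato (2014) referenced above, here is a minimal Python sketch of a ternary-binary inner product with vectors packed into integers; the encoding convention (bit = 1 for +1, plus a nonzero mask for the ternary vector) is our own illustrative choice:

def ternary_binary_dot(t_sign, t_nonzero, v_bits, d):
    """Inner product of t in {-1, 0, +1}^d with v in {-1, +1}^d.

    t_sign:    int bitmask, bit j = 1 where t_j = +1
    t_nonzero: int bitmask, bit j = 1 where t_j != 0
    v_bits:    int bitmask, bit j = 1 where v_j = +1
    """
    full = (1 << d) - 1
    # Positions among the nonzeros of t where the signs of t and v agree:
    matches = bin(~(t_sign ^ v_bits) & t_nonzero & full).count("1")  # XOR, AND, bit count
    nonzeros = bin(t_nonzero).count("1")
    return 2 * matches - nonzeros  # (#agreements) - (#disagreements)

# Example: t = (+1, 0, -1), v = (-1, +1, -1)  ->  t.v = -1 + 0 + 1 = 0
print(ternary_binary_dot(0b001, 0b101, 0b010, 3))  # prints 0

On 64-element chunks, `matches` becomes a handful of machine instructions (XOR, AND, NOT, popcount), which is where the reported speed-up comes from.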
In this section, we introduce our compression model and discuss its time and space complexity. We consider a convolutional layer with a filter size of w_x × w_y × c, where w_x and w_y are the spatial size and c is the number of input channels. If w_x = w_y = 1, we can regard this layer as a fully connected layer. This three-dimensional volume is reshaped to form a D_I-dimensional vector, where D_I = w_x × w_y × c. The filter weights and biases can be formulated by W ∈ R^{D_I × D_O} and b ∈ R^{D_O}, where D_O is the number of output channels. Let x ∈ R^{D_I} denote an activation vector obtained by vectorizing the corresponding three-dimensional volume. At test-time, we need to compute Wᵀx + b followed by a non-linear activation function.

In our compressed network, W is decomposed into two matrices before test-time as follows:

    W ≈ M_w C_w,    (1)

where M_w ∈ {−1, 0, +1}^{D_I × k_w} is a ternary basis matrix, C_w ∈ R^{k_w × D_O} is a co-efficient matrix, and k_w is the number of basis vectors. Since M_w only takes the three values, it consumes only 2 bits per element. Setting a sufficiently small value to k_w further reduces total memory consumption. From the viewpoint of approximation quality, it should be noted that a large number of elements in W take values close to zero. To fit them well enough, a zero value must be included in the basis. The ternary basis satisfies this characteristic; in practice, the ternary basis gives a better approximation than a binary basis, as we discuss in Section 3.

The activation vector x is also factored into the following form:

    x ≈ M_x c_x + b_x 1,    (2)

where M_x ∈ {−1, +1}^{D_I × k_x} is a binary basis matrix, c_x ∈ R^{k_x} is a real-valued co-efficient vector, b_x ∈ R is a bias, and k_x is the number of basis vectors. Since the elements of x are often biased, e.g., activations from ReLU take non-negative values and have a non-zero mean, b_x is added to this decomposition model. While c_x and b_x reflect the range of activation values, M_x determines the approximated activation values within the defined range. This factorization must be computed at test-time, because the intermediate activations depend on the input to the first layer. In practice, however, factorizing x into M_x, c_x, and b_x requires an iterative optimization, which is very slow. Since the scale of activation values within a layer is almost similar regardless of x, we pre-compute canonical c_x and b_x in advance and only optimize M_x at test-time. As we discuss later, the optimal M_x under fixed c_x and b_x can be selected using a lookup table, resulting in fast factorization.

Substituting Eqs. (1) and (2) into the feed-forward computation gives

    Wᵀx + b ≈ (M_w C_w)ᵀ(M_x c_x + b_x 1) + b = C_wᵀ M_wᵀ M_x c_x + b_x C_wᵀ M_wᵀ 1 + b.    (3)

The new bias b_x C_wᵀ M_wᵀ 1 + b in Eq. (3) is pre-computable in advance, because C_w, M_w and b_x are fixed at test-time. It should be noted that M_wᵀ M_x is a multiplication of a ternary and a binary matrix, which is efficiently computable using three logical operations, XOR, AND, and bit count, as previously investigated (Ambai & Sato, 2014). After computing M_wᵀ M_x, the two co-efficient components, c_x and C_w, are multiplied from the right and left in this order. Since c_x and C_w are much smaller than W, the total number of floating-point computations is drastically reduced. A small numerical sketch of the factored forward pass of Eq. (3) is given below.

The time and space complexity are summarized in Tables 1 and 2. As can be seen from Table 1, most of the floating-point operations are replaced with logical operations. In this table, B means the bit width of a variable used in the logical operations, e.g., B = 64 if a type of unsigned long long is used in the C/C++ language. Table 2 suggests that if k_w is sufficiently smaller than D_I and D_O, the total size of M_w and C_w is reduced compared to the original parameterization.

Table 1: Number of operations.

                                        floating-point   logical
    operation                           multiply-adds    AND               XOR               bit count
    original (Wᵀx)                      D_I D_O          0                 0                 0
    proposed (C_wᵀ M_wᵀ M_x c_x)        k_x k_w + k_w D_O  (D_I k_x k_w)/B  (D_I k_x k_w)/B  (D_I k_x k_w)/B

Table 2: Memory consumption. Real values are represented in single precision (32 bits/element).

                 original      proposed
    variables    W             M_w          C_w           c_x, b_x
    size (bits)  32·D_I·D_O    2·D_I·k_w    32·k_w·D_O    32·(k_x + 1)
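As referenced above, the following numpy sketch checks the factored forward pass of Eq. (3) against the dense computation; the random factors are illustrative only (in the real method M_w, C_w come from Algorithm 1 and M_x, c_x, b_x from the activation encoding):

import numpy as np

rng = np.random.default_rng(0)
D_I, D_O, k_w, k_x = 64, 32, 16, 4

# Illustrative factors (here the decompositions are taken as exact)
M_w = rng.choice([-1.0, 0.0, 1.0], size=(D_I, k_w))   # ternary basis
C_w = rng.normal(size=(k_w, D_O))                     # real co-efficients
M_x = rng.choice([-1.0, 1.0], size=(D_I, k_x))        # binary basis
c_x = rng.normal(size=k_x)
b_x, b = 0.3, rng.normal(size=D_O)

W = M_w @ C_w                                         # eq. (1)
x = M_x @ c_x + b_x                                   # eq. (2)

# Eq. (3): the expensive part is M_w.T @ M_x (ternary x binary -> logical ops),
# followed by two small floating-point multiplications and a pre-computed bias.
bias = b_x * (C_w.T @ M_w.T @ np.ones(D_I)) + b       # pre-computable at load time
y_factored = C_w.T @ (M_w.T @ M_x @ c_x) + bias
y_dense = W.T @ x + b

print(np.allclose(y_factored, y_dense))               # True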
To factorize W, we need to solve the following optimization problem:

    J_w = min_{M_w, C_w} ||W − M_w C_w||²_F.

However, the ternary constraint makes this optimization very difficult. Therefore, we take an iterative approach that repeats rank-one approximations one by one, as shown in Algorithm 1. Let m_w^(i) ∈ {−1, 0, +1}^{D_I} denote the i-th column vector of M_w and c_w^(i) ∈ R^{1 × D_O} denote the i-th row vector of C_w. Instead of directly minimizing J_w, we iteratively solve the following rank-one approximation:

    J^(i) = min_{m_w^(i), c_w^(i)} ||R − m_w^(i) c_w^(i)||²_F.

Algorithm 1 Decompose W into M_w and C_w.

Require: W, k_w
Ensure: factorized components M_w and C_w.
1: R ← W
2: for i ← 1 to k_w do
3:     Initialize m_w^(i) by three random values {−1, 0, +1}.
4:     Minimize ||R − m_w^(i) c_w^(i)||²_F by repeating the following two steps until convergence:
5:     [Step 1] c_w^(i) ← m_w^(i)ᵀ R / (m_w^(i)ᵀ m_w^(i))
6:     [Step 2] m_w,j^(i) ← argmin_{a ∈ {−1, 0, +1}} ||r_j − a c_w^(i)||²₂, for j = 1, ..., D_I, where r_j is the j-th row vector of R
7:     R ← R − m_w^(i) c_w^(i)
8: end for

A minimal numpy sketch of this procedure follows.
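A minimal numpy sketch of Algorithm 1, as referenced above; the convergence test and the guard against an all-zero basis vector are our own additions:

import numpy as np

def ternary_decompose(W, k_w, max_iters=30):
    """Approximate W (D_I x D_O) by M_w C_w with M_w in {-1, 0, +1} (Algorithm 1)."""
    rng = np.random.default_rng(0)
    R = W.astype(float).copy()
    M_w = np.zeros((W.shape[0], k_w))
    C_w = np.zeros((k_w, W.shape[1]))
    cand = np.array([-1.0, 0.0, 1.0])
    for i in range(k_w):
        m = rng.choice(cand, size=W.shape[0])           # random ternary init
        for _ in range(max_iters):
            c = m @ R / max(m @ m, 1.0)                 # step 1: least-squares row
            # step 2: per-row ternary value minimizing ||r_j - a c||^2
            errs = np.stack([((R - a * c) ** 2).sum(axis=1) for a in cand])
            m_new = cand[errs.argmin(axis=0)]
            if np.array_equal(m_new, m):                # converged
                break
            m = m_new
        M_w[:, i], C_w[i] = m, c
        R -= np.outer(m, c)                             # peel off the rank-one term
    return M_w, C_w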
Binary decomposition of a given activation vector x can be performed by minimizing

    J_x(M_x, c_x, b_x) = ||x − (M_x c_x + b_x 1)||²₂.

However, this requires an iterative optimization that is too slow to run at test-time. Our method makes the decomposition faster by pre-computing canonical c_x and b_x from training data and only optimizing M_x at test-time using a lookup table. This compromise is reasonable for two reasons: (1) the scale of activation values is similar regardless of the vector elements within a layer, and (2) c_x and b_x reflect the scale of the approximated activation values. Knowing these properties, c_x and b_x are obtained by minimizing J_x(M_x, c_x, b_x; X̃), where X̃ is constructed as follows. First, N_T different activation vectors {x_i}_{i=1}^{N_T} are collected from N_T randomly chosen training data. Second, n elements are randomly sampled from each x_i. The sampled nN_T elements are concatenated to form a vector X̃ ∈ R^{nN_T}. We use c_x and b_x as constants at test-time, and discard the M_x obtained in this step.

At test-time, we only need to solve the following optimization for each element x_j of x:

    m_x^(j) = argmin_{β ∈ {−1, +1}^{k_x}} (x_j − (βᵀ c_x + b_x))²,    (7)

where m_x^(j) is the j-th row of M_x. Since k_x is sufficiently small, the 2^{k_x} possible solutions can be exhaustively verified (line 5 of Algorithm 2). This can be regarded as a nearest neighbour search in one-dimensional space: we call βᵀ c_x + b_x a prototype, and there are 2^{k_x} possible prototypes because β takes 2^{k_x} possible combinations. The nearest prototype to x_j, and hence the optimal solution m_x^(j), can be efficiently found using a lookup table, as follows.

Preparing the lookup table: We define L bins that evenly divide the one-dimensional space in the range from the smallest to the largest prototype. Let x̂_l denote the representative value of the l-th bin, located at the center of the bin. For each x̂_l, we solve Eq. (7) and assign the solution to the bin.

Activation encoding: At test-time, x_j is quantized into L levels; in other words, x_j is transformed into an index of the lookup table. Let p_max and p_min denote the largest and smallest prototype, respectively. We transform x_j as follows:

    q̂ = (L − 1)(x_j − p_min)/(p_max − p_min) + 1,    (8)
    l = min(max(⌊q̂ + 1/2⌋, 1), L).    (9)

The range from p_min to p_max is linearly mapped to the range from 1 to L by Eq. (8), and q̂ is rounded and truncated to the range from 1 to L by the max and min functions in Eq. (9). If L is sufficiently large, the solution assigned to the l-th bin can be regarded as a nearly optimal solution, because the difference between x_j and the center of the bin x̂_l becomes very small. We found that L = 4096 is sufficient. The time complexity of this encoding is O(D_I). A small sketch of the table construction and encoding is given below.
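A minimal numpy sketch of the lookup-table construction and the encoding of Eqs. (8)-(9); the helper names are our own:

import numpy as np
from itertools import product

def build_lookup_table(c_x, b_x, L=4096):
    """Assign to each of L bins the binary code of its nearest prototype."""
    k_x = len(c_x)
    betas = np.array(list(product([-1.0, 1.0], repeat=k_x)))   # 2^{k_x} candidates
    protos = betas @ c_x + b_x                                 # all prototypes
    p_min, p_max = protos.min(), protos.max()
    centers = p_min + (np.arange(L) + 0.5) * (p_max - p_min) / L
    lut = betas[np.abs(centers[:, None] - protos[None, :]).argmin(axis=1)]
    return lut, p_min, p_max

def encode_activations(x, lut, p_min, p_max):
    """Quantize each x_j to a bin index (eqs. 8-9) and read out rows of M_x."""
    L = len(lut)
    q = (L - 1) * (x - p_min) / (p_max - p_min) + 1   # eq. (8)
    l = np.clip(np.floor(q + 0.5), 1, L).astype(int)  # eq. (9)
    return lut[l - 1]                                  # M_x, one row per element of x

# Example with k_x = 4:
rng = np.random.default_rng(0)
lut, p_min, p_max = build_lookup_table(rng.normal(size=4), 0.2)
M_x = encode_activations(rng.normal(size=8), lut, p_min, p_max)  # shape (8, 4)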
"}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "We tested our method on three different convolutional neural networks: a CNN for handwritten digits (LeCun et al., 1998), VGG-16 for ImageNet classification (Simonyan & Zisserman, 2015), and VGG-Face for large-scale face recognition (Parkhi et al., 2015). To compute the memory compression rate, the size of W and the total size of M_w and C_w were compared. To obtain a fair evaluation of computation time, the test-time code for forward propagation was implemented without any parallelization scheme, e.g., multi-threading or SIMD, and was used for both the compressed and uncompressed networks. The computation time includes both binary activation encoding and the calculation of Eq. (3). We used an Intel Core i7-5500U 2.40-GHz processor.

MNIST is a database of handwritten digits consisting of 60000 training and 10000 test 28 × 28 gray-scale images with ground-truth labels from 0 to 9. We trained our CNN using an example code in MatConvNet 1.0-beta18 (Vedaldi & Lenc, 2015). Our architecture is similar to LeNet-5 (LeCun et al., 1998) but has a different number of input and output channels. In the layer configuration, the parameters of a convolutional layer are denoted as (conv<receptive field size>-<number of output channels>), the parameters of a fully connected layer are denoted as (fc<number of input channels>-<number of output channels>), and (maxpool) is 2 × 2 subsampling without overlapping. The error rate of this network is 0.86%.

We applied our method to the first fully connected layer (fc1024-640) and set n = 10 and N_T = 1000 to learn c_x and b_x from the randomly chosen nN_T activations. The cases of k_x = 1, 2, 3, 4 and k_w = D_O, D_O/2, D_O/5 were tested; this means that k_w was set to 640, 320, and 128.

Figure 2 (a) and (b) show the relationships among the increase in error rate, the memory compression rate, and the acceleration rate. It was observed that error rates basically improved along with increasing k_x and saturated at k_x = 4. It is interesting that k_x = 2, i.e., only 2 bits per element for encoding an activation x, still achieved good performance. While smaller k_w achieved better compression and acceleration rates, error rates rapidly increased when k_w = D_O/5. One of the well-balanced parameter choices was (k_x, k_w) = (4, D_O/2), which resulted in 1.95× faster processing and a 34.4% memory compression rate, in exchange for a 0.19% increase in the error rate.

Figure 2: Results on MNIST. The first fully connected layer was decomposed. (a) error vs. memory compression; (b) error vs. acceleration.

"}, {"section_index": "5", "section_name": "5.2 VGG-16 FOR IMAGENET CLASSIFICATION TASK", "section_text": "The ILSVRC2012 dataset (Russakovsky et al., 2015) consists of 1.2 million training, 50,000 validation, and 100,000 test images. Each image represents one of 1000 object categories. In this experiment, we used the network model of VGG-16 (model D in (Simonyan & Zisserman, 2015)), which consists of 13 convolutional layers and 3 fully connected layers followed by a softmax layer.

First, all three fully connected layers were compressed with our algorithm. We set n = 10 and N_T = 1000 to learn c_x and b_x from the randomly chosen nN_T activations. The cases of k_x = 2, 3, 4 and k_w = D_O/2, D_O/4, D_O/8, D_O/16 were tested. The case of k_x = 1 was omitted because this setting resulted in a very high error rate. Note that each of the fully connected layers has a different D_O; k_w was set independently for each layer according to its D_O. The top-5 error rates were evaluated on the validation dataset. The top-5 error rate of the original network is 13.4%.

The three lines with circles in Figure 3 show these results. It should be noted that much higher acceleration rates and smaller compression rates, with a small loss of accuracy, were achieved than in the case of the network for MNIST. Interestingly, the case of k_w = D_O/4 still performed well, due to the low-rank characteristics of the weights in the VGG-16 network.

Figure 3: Results on VGG-16. The last three fully connected layers were decomposed. (a) error vs. memory compression; (b) error vs. acceleration.
Although the error rates rapidly increased when k_w took much smaller values, we found that this could be improved by tuning k_w of the third layer. More specifically, we additionally tested the following cases: while k_w was set to D_O/2, D_O/4, D_O/8, and D_O/16 for the first and second layers, k_w was fixed to D_O for the third layer, and k_x was set to 4. This is plotted with a red line in Figure 3. In this way, the memory compression rate and acceleration rate noticeably improved; setting appropriate parameters for each layer is important for improving the total performance. Table 3 shows the details of the best balanced case, in which 15× faster processing and a 5.2% compression rate were achieved in exchange for a 1.43% increase in error rate.

Table 3: Best balanced parameters for decomposing the three fully connected layers of VGG-16. The top-5 error is 13.4% for the original network and 14.8% for the proposed one.

                      original               proposed
    layer             MBytes   msec    k_w     k_x   MBytes   ratio    msec   acceleration
    fc25088-4096      392.0    142.4   D_O/8   4     11.1     2.8%     6.1    23.5×
    fc4096-4096       64.0     22.8    D_O/8   4     8.5      13.3%    3.0    7.5×
    fc4096-1000       15.6     5.7     D_O     4     4.8      30.7%    2.3    2.5×
    total             471.6    170.9                 24.4     5.2%     11.4   15.0×

Table 4: Results of decomposing the convolutional layers of VGG-16.

Next, we also tested compressing the convolutional layers. In this experiment, k_w and k_x were set to D_O and 4. This setting accelerates each of the layers 2.5 times faster on average. Table 4 shows the positions of the compressed layers, the top-5 errors, and the acceleration rates of the entire network. Although k_w and k_x must be larger than those of the fully connected layers to avoid error propagation, compressing convolutional layers is still beneficial for accelerating the entire network. In summary, while compressing fully connected layers is beneficial for reducing memory, compressing convolutional layers is beneficial for reducing the entire computation time.

"}, {"section_index": "6", "section_name": "5.3. VGG-FACE FOR FACE RECOGNITION TASK", "section_text": "In our experiment, we did not apply a descriptor embedding technique based on triplet loss minimization (Parkhi et al., 2015). Following the evaluation protocol introduced in a previous paper (Parkhi et al., 2015), we used the Labeled Faces in the Wild dataset (LFW) (Huang et al., 2007), which includes 13,233 face images with 5,749 identities. The LFW defines 1200 positive and 1200 negative pairs for testing. We used the 2400 test pairs to compute the ROC curve and the equal error rate (EER). The EER is defined as the error rate at the ROC operating point where the false positive and false negative rates are equal. The EER of the original network is 3.8%.

First, the two fully connected layers were compressed using our algorithm. We set n = 10 and N_T = 1000 to learn c_x and b_x from the randomly chosen nN_T activations. We tested the cases of k_x = 1, 2, 3, 4 and k_w = D_O/2, D_O/4, D_O/8, D_O/16.

Figure 4: Results on VGG-Face. The last two fully connected layers were decomposed. (a) error vs. memory compression; (b) error vs. acceleration.

Figure 4 reveals the interesting fact that even the fastest and smallest network configuration, k_x = 1 and k_w = D_O/16, had little impact on the EER, in contrast to the previous ImageNet classification task, in which the recognition results were corrupted when k_x = 1.
This indicates that the 4096-dimensional feature space is well preserved regardless of such a coarse discretization of both weights and activations.

Next, we also tested compressing the convolutional layers. In this experiment, k_w and k_x were set to D_O and 4, the same setting used in Table 4. Table 5 shows the positions of the compressed layers and the EERs. The acceleration rates were almost the same as the results shown in Table 4, because the architecture of VGG-Face is the same as VGG-16 and we used the same parameters for k_w and k_x. Interestingly, compressing multiple layers, from the 2nd to the 10th, still preserves the original EER. As can be seen from this table, our method can work very well depending on the kind of machine learning task.

Table 5: Results of decomposing the convolutional layers of VGG-Face.

"}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "We proposed a network compression model that consists of two components: ternary matrix decomposition and binary activation encoding. Our experiments revealed that the proposed compression model is applicable not only to multi-class recognition but also to feature embedding. Since our approach is a post-processing step for a pre-trained model, it is promising that recent networks designed for semantic segmentation, describing images, stereo matching, depth estimation, and much more can also be compressed with our method. For future work, we plan to further improve the approximation error by investigating the discrete optimization algorithm."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting Parameters in Deep Learning. NIPS, pp. 2148-2156, 2013.

Song Han, Jeff Pool, John Tran, and William J Dally. Learning both Weights and Connections for Efficient Neural Networks. NIPS, pp. 1135-1143, 2015.

Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. ICLR, 2016.

Sam Hare, Amir Saffari, and Philip H. S. Torr. Efficient Online Structured Output Learning for Keypoint-Based Object Tracking. CVPR, pp. 1894-1901, 2012.

Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: a Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts Amherst Technical Report (07-49), 2007.

Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, pp. 81-87, 2015.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up Convolutional Neural Networks with Low Rank Expansions. BMVC, 2014.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278-2323, 1998.

Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep Face Recognition. BMVC, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. ECCV, pp. 525-542, 2016.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep Fried Convnets. ICCV, pp. 1476-1483, 2015.

Yamauchi Yuji, Ambai Mitsuru, Sato Ikuro, Yoshida Yuichi, Fujiyoshi Hironobu, and Yamashita Takayoshi. Asymmetric Feature Representation for Object Recognition in Client Server System. ACCV, pp. 598-612, 2014.

Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. PAMI, 2015.
"}, {"section_index": "9", "section_name": "A. BINARY VS. TERNARY", "section_text": "Figure 5 illustrates the reconstruction errors of the 4096 × 1000 weight matrix of the last fully connected layer in the VGG-16 model (Simonyan & Zisserman, 2015). We tested both the binary and the ternary constraint on M_w for comparison. The reconstruction error J_w monotonically decreased along with an increase in k_w. It was clear that the ternary basis provided a better reconstruction than the binary basis.

Figure 5: The 4096 × 1000 weight matrix of the last fully connected layer in the VGG-16 model (Simonyan & Zisserman, 2015) is decomposed under two different constraints: (blue) {−1, +1} and (red) {−1, 0, +1}. The x-axis is the number of basis vectors k_w (from D_O/2 to 2D_O) and the y-axis is the reconstruction error J_w."}]
HJ7O61Yxe
[{"section_index": "0", "section_name": "MODELING RELATIONAL TIME SERIES USING GAUS-\nSIAN EMBEDDINGS", "section_text": "Ludovic Dos Santos* Ludovic Denoyer, Benjamin Piwowarski & Patrick Gallinari"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Relational time series, i.e. multiple time series where the observations are correlated both inside\neach series and between series occur in many domains such as ecology, medicine, biology, earth\nobservation by satellite imagery or local measurements, multimedia or even social data analysis.\nThe correlations between the different observed series can come from a proximity (e.g. earth obser-\nvation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the\nstatistical literature, the modeling of relational time series has been the topic of a dedicated field:\nspatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different method-\nologies have been developed for handling a large variety of spatio-temporal phenomena, with an\nemphasis on the analysis of natural observations like weather prediction, ecology or remote sensing.\nIn the machine learning domain, there exists a vast literature dedicated to sequence or time series\nprediction. Recently, deep recurrent neural networks have witnessed notable successes in different\nsequence and time series modeling tasks leading to an increasing number of publications, e.g. (Bar-\nbounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite\na large number of recent developments, the modeling and analysis of relational time series has only\nattracted a few attention in the field of representation learning. In addition, most of the models are\ndeterministic in the sense that they are trained to learn a fixed mapping for modeling the dynamics\nof the series.\nWe propose a new state space model for relational time series able to model the uncertainty at the\nobservation and at the modeling levels. The principle of this approach is to associate each point o!\na time series to a Gaussian distribution in a latent space, the distribution over the observed value:\nbeing directly computed from these latent distributions. The model has two main components. On\u00ab\nis responsible for the dynamics in the latent space. This component is thus modeling the evolutior\nof the Gaussian distribution considering both the temporal intra-series and the relational inter-serie:\n\u201cBoth authors contributed equally to this worl"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We address the problem of modeling multiple simultaneous time series where the\nobservations are correlated not only inside each series, but among the different\nseries. This problem happens in many domains such as ecology, meteorology, etc.\nWe propose a new dynamical state space model, based on representation learn-\nng, for modeling the evolution of such series. The joint relational and temporal\nlynamics of the series are modeled as Gaussian distributions in a latent space. A\n1ecoder maps the latent representations to the observations. The two components\ndynamic model and decoder) are jointly trained. Using stochastic representations\nillows us to model the uncertainty inherent to observations and to predict unob-\nserved values together with a confidence in the prediction.\ndependencies. 
A second component acts as a decoder and maps the latent representations associate\nwith each series to the corresponding observations in the output space.\nThe contributions of the paper are thus: (i) a new dynamical model for relational time series in-\nspired by representation learning; (ii) a stochastic component for modeling the uncertainties at the\nobservation and dynamic levels\nThe paper is organized as follows. In Section 2 we introduce some related work on forecasting\nin time series, representation learning for time series, and recent deep learning works focusing or\nmodeling uncertainty. The model is presented in Section 3 together with four different variants\nSection 4 presents experimental results on four datasets, and section 5 concludes this work anc\ngives some perspectives.\nThe classical topic of time series modeling and forecasting has given rise to an extensive literature.\nIn statistics, classical linear models include many variations around auto-regressive and moving\naverage models (De Gooijer & Hyndman (2006)). In machine learning, non linear extensions of\nthese models based on neural networks have been proposed as early as the 90s, opening the way to\nmany other non linear models including kernel methods (Muller et al. (99)).\nRelational time series have mainly been studied in the field of spatio-temporal statistics (Cressie &\nWikle (2011); Wikle & Hooten (2010)). The traditional method first relied on a descriptive approact\nusing the first and second-order moments of the process for modeling the spatio-temporal dependen.\ncies. More recently, dynamical state models, where the current state is conditioned on the past have\nbeen explored (Wikle (2015)). These models have been considered both for continuous/discrete\nspace and time components. However, the most common way is to consider discrete time, leading\nto the modeling of time series of spatial processes as we do here. When space is discrete, the mode\ncomes down to a general vectorial autoregressive formulation. These models face a curse of dimen.\nsionality in the case of a large number of sources. Different strategies have been adopted to solve this\nproblem such as embedding the spatio-temporal process in a low-dimensional manifold or param.\neter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine\nlearning for modeling dynamical phenomena. Also, for complex underlying processes, observation:\nonly provide an incomplete description of the process dynamics so that modeling uncertainty at the\ndata and model levels is an important topic.\nIn the last 10 years, there has been a growing interest in learning latent representations for example\nthrough neural networks and deep learning. Dynamical state space models such as recurrent neural\nnetworks (RNN), which have been used for time series forecasting in different contexts since the\nearly nineties (Connor et al. (1994)), have recently witnessed important successes in different areas\nfor general sequence modeling problems, leading to breakthroughs in domains like speech (Graves\net al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), and\nmany others. Among this family, the model closest to ours is the dynamic factor graph model of\n(Mirowski & LeCun (2009)) designed for multiple series modeling for the tasks of forecasting and\nimputation. 
However, this model does not consider relational dependencies, which are the focus of our approach.

Most of the above models make use of pointwise representations and do not explicitly model the uncertainties present in the process and/or in the observations. Recently, in the representation learning community, there has been a growing interest in using distributions as latent representations instead of points. (Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series prediction (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state space formulation.

Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extensions of the variational inference method to time series have been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)), but contrarily to those works, we take into account relationships (both temporal and relational). Furthermore, in our model, we work directly with random variables to predict observations from time series. This gives us direct access to the output distribution, with no need to sample or work with intractable distributions.

Our model is built on top of the model in (Ziat et al. (2016)), which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

Let us consider a set of n temporal sequences[1] x_1, ..., x_n such that x_i^(t) ∈ R is the value of the i-th sequence at time t, defined by x_i = (x_i^(1), ..., x_i^(T)), T being the number of observed time steps. For simplicity, we consider that all the series have the same length, but this is not restrictive.

[1] For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.

We model the dependencies between the different series through a graph, the different series sources being the graph vertices and the links modeling explicit dependencies between the sources. These links can reflect a spatial proximity between the sources of the series, a similarity of behavior between users, or any other predefined relation. These explicit relations will be modeled in the latent space. Our hypothesis is that they will constrain the representations of linked sources to be similar to one another in the latent space, this similarity being controlled by the strength of the link between the two time series, denoted e_{i,j}. We assume that the graph structure is static in time and is provided as prior information. The model could be extended to learn these static dependencies, but this is not considered here.

Let us denote τ the size of the prediction horizon. The forecasting problem considered here is to compute, for all series i, the values x_i^(T+k) for all k in [1; τ].
Note that the model can be straightforwardly extended to the imputation problem, which aims at predicting missing values.

"}, {"section_index": "3", "section_name": "3.2 INFORMAL DESCRIPTION", "section_text": "The proposed model is a dynamic state space model: the dynamics are modeled in a continuous latent state space, and the observations are generated from states in this latent space. State space models have already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and for spatio-temporal processes (e.g. Wikle & Hooten (2010)).

Both the observations and the dynamics are subject to uncertainties. Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics, being hidden, are not directly accessible and should be modeled as a stochastic process.

To handle this uncertainty, we propose a model, namely the Relational Dynamic model with Gaussian representations (RDG), that represents latent factors as distributions in a latent space and learns the series dynamics in this latent space. The distributions themselves are estimated using observations, as for any other representation learning model. Besides being more adapted to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated with the series, and in particular the confidence or variance associated with the predictions.

The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two main components. (i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series x_i^(1), ..., x_i^(T) is thus associated with a series of random variables in R^d denoted Z_i^(1), ..., Z_i^(T), Z_i^(t) being the latent factor explaining the observed value of series i at time t, and d the size of the latent space. We model each Z_i^(t) as a multivariate normal variable N(μ_i^(t), Σ_i^(t)). The observation can be computed from this latent distribution by using a decoding function f mapping Z_i^(t) to X_i^(t) = f(Z_i^(t)). (ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function h that maps the latent random variable Z_i^(t) to the next latent variable Z_i^(t+1) = h(Z_i^(t)). The function h thus models the time dynamics. In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series: for any couple of series i and j with a known dependency, i.e. such that e_{i,j} > 0, we add a corresponding constraint on Z_i^(t) and Z_j^(t), as explained in Section 3.3.3.

In the following, we explain how the distributions corresponding to the random variables Z_i^(t) are learned, jointly with the functions f (decoder component) and h (dynamic component).

"}, {"section_index": "4", "section_name": "3.3 MODEL DEFINITION", "section_text": "We define a global loss function L(μ, Σ, f, h), where μ and Σ are the means and covariance matrices for all the series and for all the time steps between 1 and T. The loss is a sum of three terms: (i) a decoding loss Δ_De, (ii) a dynamical loss Δ_Dy and (iii) a structural loss Δ_R:

    L(μ, Σ, f, h) = Σ_{i=1}^{n} Σ_{t=1}^{T} Δ_De(f(Z_i^(t)), x_i^(t)) + λ_Dy Σ_{i=1}^{n} Σ_{t=1}^{T−1} Δ_Dy(Z_i^(t+1), h(Z_i^(t))) + λ_R Σ_{i,j=1}^{n} Σ_{t=1}^{T} e_{i,j} Δ_R(Z_i^(t), Z_j^(t)),    (1)

where λ_Dy and λ_R are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component, and forces both f and the learned distributions of the variables Z to "explain" the observations; the second term, the dynamic component, encourages h to model the time dynamics in the latent space; the third term captures the relations between pairs of series. In the following, we use for f a linear function, and h will be either a linear or a non-linear function (see Section 3.3.2).
Learning: Learning the model is performed by minimizing the loss function L(μ, Σ, f, h) with respect to μ, Σ, f and h. To simplify the notation, the parameters of f and h are not made explicit; f and h are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function f and the dynamical one h. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique. This optimization can easily be performed on a large-scale dataset, and/or by using GPUs.

"}, {"section_index": "5", "section_name": "3.3.1 FROM LATENT SPACE TO OBSERVATIONS", "section_text": "The mapping onto the latent space is learned so that the values x_i^(t) of each series can be predicted from their respective Gaussian embeddings Z_i^(t) through the function f. We define below two alternative decoding loss functions Δ_De, used in the experiments for measuring the error between the predicted distribution f(Z_i^(t)) and the observation x_i^(t). Other losses could be used with the same model.

The first loss measures the difference between the expected value of f and the observation, using a mean-square error:

    Δ_De1(f(Z_i^(t)), x_i^(t)) ≜ (E[f(Z_i^(t))] − x_i^(t))².

When considering a linear decoding function such as f(·) = ⟨θ, ·⟩, θ being the set of parameters of f, Δ_De1 can be rewritten as:

    Δ_De1(f(Z_i^(t)), x_i^(t)) = (⟨θ, μ_i^(t)⟩ − x_i^(t))².

The second loss measures the distance between the random variable modeling the predicted observations and the observations. This is the expectation of the mean squared error between the predictions and the observations:

    Δ_De2(f(Z_i^(t)), x_i^(t)) ≜ E[(f(Z_i^(t)) − x_i^(t))²].

When f is a linear function, this loss can be written as:

    Δ_De2(f(Z_i^(t)), x_i^(t)) = θᵀ Σ_i^(t) θ + (⟨θ, μ_i^(t)⟩ − x_i^(t))².

A small numerical sketch of these two losses is given below.
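As referenced above, a minimal numpy sketch checking the closed form of Δ_De2 against a Monte-Carlo estimate (the sampling check is our own illustration):

import numpy as np

rng = np.random.default_rng(0)
d = 3
theta = rng.normal(size=d)                      # linear decoder f(z) = <theta, z>
mu = rng.normal(size=d)                         # mean of Z_i^(t)
A = rng.normal(size=(d, d)); Sigma = A @ A.T    # a valid covariance matrix
x = 1.5                                         # observed value x_i^(t)

# Delta_De1 only involves the mean: (<theta, mu> - x)^2
de1 = (theta @ mu - x) ** 2

# Closed form of Delta_De2: theta^T Sigma theta + (<theta, mu> - x)^2
de2 = theta @ Sigma @ theta + (theta @ mu - x) ** 2

# Monte-Carlo check: E[(f(Z) - x)^2] over samples Z ~ N(mu, Sigma)
z = rng.multivariate_normal(mu, Sigma, size=100000)
de2_mc = np.mean((z @ theta - x) ** 2)
print(de2, de2_mc)   # the two values should be close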
"}, {"section_index": "6", "section_name": "3.3.2 MODELING DYNAMICS", "section_text": "The loss function $\Delta_{Dy}$ aims at finding values $Z_t^{(i)}$ and a dynamic model $h$ that will be used to predict the representation of the next state of time series $i$, $Z_{t+1}^{(i)}$. The function $h$ maps a distribution $\mathcal{N}(\mu_t^{(i)}, \Sigma_t^{(i)})$ to $\mathcal{N}(\mu_{t+1}^{(i)}, \Sigma_{t+1}^{(i)})$. Based on (Vilnis & McCallum (2015); Dos Santos et al. (2016)), we use a Kullback-Leibler divergence (noted $D_{KL}(\cdot\|\cdot)$) to compare the distribution at $(t+1)$ to the distribution predicted by $h$.
We propose in the following two alternative functions for $h$. For the first one, we consider that the latent representation at time $(t+1)$ is a linear transformation of the latent distribution at time $t$. The transformed variable is also a Gaussian and its parameters can be easily computed. In this case, $h$ is a linear function from $\mathbb{R}^d$ to $\mathbb{R}^d$ which is represented by a matrix $\gamma \in \mathcal{M}_{d,d}(\mathbb{R})$:

$$\Delta_{Dy_1}\big(Z_{t+1}^{(i)}, h(Z_t^{(i)})\big) \triangleq D_{KL}\big(Z_{t+1}^{(i)} \,\|\, \gamma Z_t^{(i)}\big) = D_{KL}\big(Z_{t+1}^{(i)} \,\|\, \mathcal{N}(\gamma \mu_t^{(i)}, \gamma \Sigma_t^{(i)} \gamma^T)\big)$$

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we used two non-linear multilayer perceptrons (MLP), one $h^m$ for predicting the means and one $h^c$ for predicting the variance: the next mean is given by $\mu_{t+1}^{(i)} = h^m(\mu_t^{(i)}, \Sigma_t^{(i)})$, and the next variance by $\Sigma_{t+1}^{(i)} = h^c(\mu_t^{(i)}, \Sigma_t^{(i)})$. This gives:

$$\Delta_{Dy_2}\big(Z_{t+1}^{(i)}, h(Z_t^{(i)})\big) \triangleq D_{KL}\big(Z_{t+1}^{(i)} \,\|\, \mathcal{N}(h^m(\mu_t^{(i)}, \Sigma_t^{(i)}),\, h^c(\mu_t^{(i)}, \Sigma_t^{(i)}))\big)$$

Note that in the second case, we also make the hypothesis that the resulting distribution (for $Z_{t+1}^{(i)}$) is Gaussian. In the two cases, the KL divergence between the two Gaussian distributions has a simple analytic form from which the gradient can be easily computed.²
²$D_{KL}(Z_t^{(i)} \| Z_t^{(j)}) = \frac{1}{2}\left( \mathrm{tr}\big( (\Sigma_t^{(j)})^{-1} \Sigma_t^{(i)} \big) + (\mu_t^{(j)} - \mu_t^{(i)})^T (\Sigma_t^{(j)})^{-1} (\mu_t^{(j)} - \mu_t^{(i)}) - d + \log \frac{|\Sigma_t^{(j)}|}{|\Sigma_t^{(i)}|} \right)$
At last, $\Delta_R$ corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structure dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:

$$\Delta_R\big(Z_t^{(i)}, Z_t^{(j)}\big) \triangleq D_{KL}\big(Z_t^{(i)} \,\|\, Z_t^{(j)}\big)$$

Minimizing the regularization term $\Delta_R$ has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality:

$$d_{TV}\big(f(Z_t^{(i)}), f(Z_t^{(j)})\big) \le \sqrt{\frac{d \cdot D_{KL}\big(Z_t^{(i)} \,\|\, Z_t^{(j)}\big)}{2}}$$

where $d_{TV}(X, Y) = \sup_{A \in Borel} |D_X(A) - D_Y(A)|$, with $X$ and $Y$ being two random variables of density distributions respectively $D_X$ and $D_Y$, and $Borel$ being the Borel sets of $\mathbb{R}^n$ (roughly, cuboids in $\mathbb{R}^n$). This means that having relatively similar representations (regarding the KL-divergence) constrains the predicted values to be similar. For more details see Appendix A.
During inference, when forecasting values, the latent distributions at $(T+1)$ are deduced from the ones at time $T$ and follow $\mathcal{N}(h(\mu_T^{(i)}, \Sigma_T^{(i)}))$, distributions at $(T+2)$ follow $\mathcal{N}((h \circ h)(\mu_T^{(i)}, \Sigma_T^{(i)}))$, and so on.
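Both the dynamical losses and the structural regularizer rely on the closed-form KL divergence between Gaussians given in the footnote. Below is a minimal NumPy sketch of the diagonal-covariance case used by the model; it is an illustrative helper, not the authors' code, with means and variances passed as vectors.

import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    # D_KL(N(mu_p, diag(var_p)) || N(mu_q, diag(var_q))): the closed form
    # behind both the dynamical losses (next-step distribution vs. the
    # h-predicted one) and the structural regularizer Delta_R.
    return 0.5 * np.sum(
        var_p / var_q
        + (mu_q - mu_p) ** 2 / var_q
        - 1.0
        + np.log(var_q)
        - np.log(var_p)
    )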
"}, {"section_index": "7", "section_name": "4.1 DATASETS AND BASELINES", "section_text": "Experiments have been performed on four datasets respectively extracted from Google Flu Trends³, WHO⁴ and from two datasets from Grand Lyon⁵ (GL) (respectively data from traffic conditions and from car parks occupancy). All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trends (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years of time. The binary relations between series are defined a priori so that the series of two countries $i$ and $j$ are linked, i.e. $e_{i,j} = 1$ in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated on 20 minute windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.
³http://www.google.org/flutrends
⁴http://www.who.int
⁵http://data.grandlyon.com
We compare our approach with five baselines. Auto-Regressive (AR): a monovariate linear auto-regressive model. It computes its predictions based on a learned linear function of a fixed number $p$ of past values of the series. The order $p$ of the model is a hyperparameter selected by grid search. Feed Forward Neural Network (FFNN): representative of non-linear auto-regressive models of order $p$, where the non-linear function is modeled as a feed-forward neural network with one hidden layer of size $s$. In this case, $p$ and $s$ are hyperparameters selected by grid search. RNN: a recurrent neural network with one hidden layer of size $s$ of recurrent units and tanh non-linearities. The RNN model is a state space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model. Also, the RNN does not introduce explicit modeling of uncertainties. KF (Kalman (1960)): a classic Kalman Filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)): a state-of-the-art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architectures for the dynamic model $h$ when it is a multi-layer perceptron with one hidden layer or a linear model.
For the evaluation we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.
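The rolling-origin protocol advances the forecast origin through each series and scores the model on the following steps. The sketch below is a schematic illustration of this protocol only; the function name and split sizes are hypothetical, since the exact settings are not specified in the text.

def rolling_origin_splits(T, min_train, horizon):
    # Successive (train, test) index splits: the forecast origin advances
    # through the series, training on everything before it and scoring on
    # the next `horizon` steps.
    for origin in range(min_train, T - horizon + 1):
        yield range(origin), range(origin, origin + horizon)

# Example: a 100-step series, at least 60 training points, horizon 5.
for train_idx, test_idx in rolling_origin_splits(100, 60, 5):
    pass  # fit on train_idx, forecast test_idx, accumulate squared errors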
"}, {"section_index": "8", "section_name": "4.2 RESULTS", "section_text": "Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1, in Figure 1b. We have tested the four variants of our approach, i.e. the combinations of $\Delta_{De_1}$ or $\Delta_{De_2}$ with $\Delta_{Dy_1}$ or $\Delta_{Dy_2}$. The proposed model obtains the best results on all the datasets except GFT, where KF performs better. Otherwise, it outperforms the baselines on two datasets (GL-P, Grand Lyon Parks, and GFT, Google Flu Trends, in the table) and gets results similar to the RNN on the two others (GL-T, Grand Lyon Traffic, and WHO). The non-linear dynamical model used for $\Delta_{Dy_2}$ usually gets better results than other models, the best combination being the use of the MSE expectation error for the decoder and the non-linear model for the dynamics (denoted RDG₂,₂ on the figure).
(a) RMSE from T+1 to T+5 on GL-T. [Plot not recoverable from the extraction.]
(b) RMSE at T+1 on the four datasets:
Model | GL-T | GL-P | GFT | WHO
AR | 0.0752 | 0.0892 | 0.0626 | 0.0832
FFNN | 0.0751 | 0.0894 | 0.045 | 0.0838
RNN | 0.0709 | 0.0890 | 0.0431 | 0.0795
KF | 0.0711 | 0.0833 | 0.0388 | 0.0799
DFG | 0.0712 | 0.0911 | 0.0592 | 0.0795
RDG₁,₁ | 0.0742 | 0.0902 | 0.0607 | 0.0848
RDG₁,₂ | 0.0707 | 0.0834 | 0.0434 | 0.0796
RDG₂,₁ | 0.0765 | 0.0896 | 0.0589 | 0.0831
RDG₂,₂ | 0.0718 | 0.0828 | 0.0429 | 0.0795
Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG$_{i,j}$ corresponds to the variant with losses ($\Delta_{De_i}$, $\Delta_{Dy_j}$).
Figure 1a shows the prediction quality (RMSE) at (T+1), (T+2), (T+3), (T+4) and (T+5), and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the Recurrent Neural Network. One can remark that at (T+5) KF does not hold up: it performs well at (T+1) but quite badly at (T+5) in comparison to the other baselines.
RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for a RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance, computed through the Gaussian embeddings. First, one can see that the ground truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.
[Figure 2: two GFT series plotted with ground truth, prediction ± variance, and test predictions.]
Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG₂,₂ model showing its range of confidence: $\mathbb{E}(f(Z_t)) \pm \mathrm{var}(f(Z_t))$. Prediction at $25+n$ corresponds to $f(h^n(Z_{25}))$.
Comparison between RDG with/without structural regularization or uncertainty: we compare in Table 1 the results between our model when taking into account the neighborhood graph ($\lambda_R \neq 0$) or not ($\lambda_R = 0$). Forecasts are uniformly worse for all datasets when we do not take into account the neighborhood graph, which suggests that the regularizer improves the model when the input graph is relevant.
Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in Ziat et al. (2016) (denoted Rainstorm): here again, our model outperforms the previous one for all the datasets.
Model | GL-T | GL-P | GFT | WHO
Rainstorm | 0.0710 | 0.0886 | 0.0440 | 0.0804
RDG ($\lambda_R = 0$) | 0.0719 | 0.0900 | 0.0441 | 0.0807
RDG | 0.0707 | 0.0828 | 0.0388 | 0.0795
Table 1: RMSE at T+1 on the four datasets."}, {"section_index": "9", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as observing the behavior of the model for imputation tasks."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "TG Barbounis, JB Theocharis, MC Alexiadis, and PS Dokopoulos. Long-term wind speed and power forecasting using local recurrent neural network models. IEEE TEC, 2006.
Jerome T Connor, R Douglas Martin, and Les E Atlas. Recurrent neural networks and robust time series prediction. Neural Networks, IEEE Transactions on, 1994.
Noel A. C. Cressie and Christopher K. Wikle. Statistics for spatio-temporal data. Wiley Series in Probability and Statistics. Hoboken, N.J.: Wiley, 2011. ISBN 978-0-471-69274-4.
Jan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 2006.
Ludovic Dos Santos, Benjamin Piwowarski, and Patrick Gallinari. Multilabel classification on heterogeneous graphs with gaussian embeddings. In ECML-PKDD, 2016.
Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In IEEE ICASSP, 2013.
TJ Hsieh, HF Hsiao, and WC Yeh. Forecasting stock markets using wavelet transforms and recurrent neural networks: An integrated system based on artificial bee colony algorithm. Applied Soft Computing, 2011.
Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82(Series D):35-45, 1960.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
DP Kingma and M Welling. Auto-encoding variational bayes. In ICLR, 2014.
Rahul G Krishnan, Uri Shalit, and David Sontag. Deep kalman filters. NIPS 2015 Workshop, 2015.
KR Muller, A J Smola, G Ratsch, B Scholkopf, J Kohlmorgen, and V Vapnik. Using support vector machines for time series prediction. Kernel Methods: Support Vector Learning, 1999.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning, 2014.
Luke Vilnis and Andrew McCallum. Word representations via gaussian embedding. ICLR, 2015.
Christopher K Wikle and Mevin B Hooten. A general science-based framework for dynamical spatio-temporal models. Test, 19(3):417-451, 2010.
Ali Ziat, Gabriella Contardo, Nicolas Baskiotis, and Ludovic Denoyer. Learning embeddings for completion and prediction of relational multivariate time-series. In ESANN, 2016.
"}, {"section_index": "11", "section_name": "
A IMPACT OF MINIMIZING THE KL-DIVERGENCE ON PREDICTED VALUES", "section_text": "In this section, we show that the structural regularization term between two time series bounds the difference between the predicted observations.
Since we use diagonal covariance matrices, and since the KL-divergence is invariant when multiplying both random variables by the same scalar, we can show that:

$$D_{KL}\big(Z_t^{(i)} \,\|\, Z_t^{(j)}\big) = \sum_{k=1}^{d} D_{KL}\big(Z_{t,k}^{(i)} \,\|\, Z_{t,k}^{(j)}\big) = \sum_{k=1}^{d} D_{KL}\big(\theta_k Z_{t,k}^{(i)} \,\|\, \theta_k Z_{t,k}^{(j)}\big)$$

Then, using Pinsker's inequality, one can see that minimizing the KL-divergence also minimizes the total variation norm (which can be more intuitive in some cases), leading to:

$$\sum_{k=1}^{d} \left( d_{TV}\big(\theta_k Z_{t,k}^{(i)}, \theta_k Z_{t,k}^{(j)}\big) \right)^2 \le \sum_{k=1}^{d} \frac{D_{KL}\big(\theta_k Z_{t,k}^{(i)} \,\|\, \theta_k Z_{t,k}^{(j)}\big)}{2}$$

with $d_{TV}$ being the total variation distance of probability measures.
Finally, the components of the random vectors $Z_t^{(i)}$ being pairwise independent, and $f$ being linear, the total variation distance of the sum is bounded by the sum of the component-wise distances:

$$d_{TV}\big(f(Z_t^{(i)}), f(Z_t^{(j)})\big) \le \sum_{k=1}^{d} d_{TV}\big(\theta_k Z_{t,k}^{(i)}, \theta_k Z_{t,k}^{(j)}\big)$$

Combining the inequalities above (together with the Cauchy-Schwarz inequality $(\sum_{k=1}^{d} a_k)^2 \le d \sum_{k=1}^{d} a_k^2$), we can straightforwardly show the following inequality:

$$d_{TV}\big(f(Z_t^{(i)}), f(Z_t^{(j)})\big) \le \sqrt{\frac{d \cdot D_{KL}\big(Z_t^{(i)} \,\|\, Z_t^{(j)}\big)}{2}}$$

"}]
BJ6oOfqge
[{"section_index": "0", "section_name": "TEMPORAL ENSEMBLING FOR SEMI-SUPERVISED\nLEARNING", "section_text": "Samuli Laine\nslaine@nvidia.com\nIn this paper, we present a simple and efficient method for training deep neural\nnetworks in a semi-supervised setting where only a small portion of training date\nis labeled. We introduce self-ensembling, where we form a consensus predictior\nof the unknown labels using the outputs of the network-in-training on different\nepochs, and most importantly, under different regularization and input augmenta-\ntion conditions. This ensemble prediction can be expected to be a better predictot\nfor the unknown labels than the output of the network at the most recent training\nepoch, and can thus be used as a target for training. Using our method, we set\nnew records for two standard semi-supervised learning benchmarks, reducing the\n(non-augmented) classification error rate from 18.44% to 7.05% in SVHN witk\n500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further\nto 5.12% and 12.16% by enabling the standard augmentations. We additionally\nobtain a clear improvement in CIFAR-100 classification accuracy by using ran-\ndom images from the Tiny Images dataset as unlabeled extra inputs during train-\ning. Finally, we demonstrate good tolerance to incorrect labels."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "It has long been known that an ensemble of multiple neural networks generally yields better pre-\ndictions than a single network in the ensemble. This effect has also been indirectly exploited when\ntraining a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013),\nor stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh\net al., 2016), where training always focuses on a particular subset of the network, and thus the com-\nplete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea\nby forming ensemble predictions during training, using the outputs of a single network on different\ntraining epochs and under different regularization and input augmentation conditions. Our train-\ning still operates on a single network, but the predictions made on different epochs correspond to an\nensemble prediction of a large number of individual sub-networks because of dropout regularization.\nThis ensemble prediction can be exploited for semi-supervised learning where only a small portion\nof training data is labeled. If we compare the ensemble prediction to the current output of the net-\nwork being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of\nthe unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the\nunlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmen-\ntation. Indeed, without neither, there would be much less reason to place confidence in whatever\nlabels are inferred for the unlabeled training data.\nWe describe two ways to implement self-ensembling, II-model and temporal ensembling. Both ap-\nproaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin.\nWe furthermore observe that self-ensembling improves the classification accuracy in fully labeled\ncases as well, and provides tolerance against incorrect labels.\nThe recently introduced transform/stability loss of Sajjadi et al. 
(2016b) is based on the same principle as our work, and the Π-model can be seen as a special case of it. The Π-model can also be seen as a simplification of the Γ-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning. Our temporal ensembling method has connections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels."}, {"section_index": "2", "section_name": "2 SELF-ENSEMBLING DURING TRAINING", "section_text": "
[Figure 1 diagram: in the Π-model (top), each input x_i is stochastically augmented and passed twice through the network with dropout, yielding z_i and z̃_i; a cross-entropy term against y_i and a squared-difference term between z_i and z̃_i are combined into the loss with weight w(t). In temporal ensembling (bottom), a single evaluation produces z_i, which is compared against the accumulated target z̃_i.]
Figure 1: Structure of the training pass in our methods. Top: Π-model. Bottom: temporal ensembling. Labels y_i are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those.
Algorithm 1 Π-model pseudocode.
Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
for t in [1, num_epochs] do
  for each minibatch B do
    z_{i∈B} ← f_θ(g(x_{i∈B}))  ▷ evaluate network outputs for augmented inputs
    z̃_{i∈B} ← f_θ(g(x_{i∈B}))  ▷ again, with different dropout and augmentation
    loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]  ▷ supervised loss component
        + w(t) · (1/(C|B|)) Σ_{i∈B} ‖z_i − z̃_i‖²  ▷ unsupervised loss component
    update θ using, e.g., ADAM  ▷ update network parameters
  end for
end for
return θ
We present two implementations of self-ensembling during training. The first one, the Π-model, encourages consistent network output between two realizations of the same input stimulus, under two different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.
We shall describe our methods in the context of traditional image classification networks. Let the training data consist of a total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x_i, where i ∈ {1...N}. Let set L contain the indices of the labeled inputs, |L| = M. For every i ∈ L, we have a known correct label y_i ∈ {1...C}, where C is the number of different classes."}, {"section_index": "3", "section_name": "2.1 Π-MODEL", "section_text": "The structure of the Π-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x_i twice, resulting in prediction vectors z_i and z̃_i. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only.
The second component, evaluated for all inputs, penalizes different predictions for the same training input x_i by taking the mean square difference between the prediction vectors z_i and z̃_i.¹ To combine the supervised and unsupervised loss terms, we scale the latter by the time-dependent weighting function w(t). By comparing the entire output vectors z_i and z̃_i, we effectively ask the "dark knowledge" (Hinton et al., 2015) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.
¹Squared difference gave slightly but consistently better results than cross-entropy loss.
It is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input x_i under the same network weights θ yield different results. In addition, Gaussian noise and augmentations such as random translation are evaluated twice, resulting in additional variation. The combination of these effects explains the difference between the prediction vectors z_i and z̃_i. This difference can be seen as an error in classification, given that the original input x_i was the same, and thus minimizing it is a reasonable goal.
In our implementation, the unsupervised loss weighting function w(t) ramps up, starting from zero, along a Gaussian curve during the first 80 training epochs. See Appendix A for further details about this and other training parameters. In the beginning the total loss and the learning gradients are thus dominated by the supervised loss component, i.e., the labeled data only. We have found it to be very important that the ramp-up of the unsupervised loss component is slow enough: otherwise, the network gets easily stuck in a degenerate solution where no meaningful classification of the data is obtained.
Our approach is somewhat similar to the Γ-model of the ladder network by Rasmus et al. (2015), but conceptually simpler. In the Π-model, the comparison is done directly on network outputs, i.e., after softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one "clean" and one "corrupted" branch as in the Γ-model, we apply equal augmentation and noise to the inputs for both branches.
As shown in Section 3, the Π-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy. A sketch of the combined loss is given below.
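The following is a minimal PyTorch-style sketch of one Π-model loss evaluation, simplified from Algorithm 1; it is not the authors' code. The names model, augment, and labeled_mask are hypothetical, and each minibatch is assumed to contain at least one labeled example.

import torch
import torch.nn.functional as F

def pi_model_loss(model, augment, x, y, labeled_mask, w_t):
    # Two stochastic passes over the same minibatch: independent
    # augmentations plus dropout make the two predictions differ.
    logits_a = model(augment(x))
    logits_b = model(augment(x))
    z = F.softmax(logits_a, dim=1)
    z_tilde = F.softmax(logits_b, dim=1)
    # Supervised cross-entropy, evaluated on the labeled inputs only.
    supervised = F.cross_entropy(logits_a[labeled_mask], y[labeled_mask])
    # Unsupervised consistency: mean squared difference of the two
    # prediction vectors; the 'mean' reduction matches the 1/(C|B|) scaling.
    unsupervised = F.mse_loss(z, z_tilde)
    return supervised + w_t * unsupervised

Note that the gradient flows through both stochastic branches, so the consistency term pulls the two realizations toward each other rather than toward a fixed target.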
Analyzing how the Π-model works, we could equally well split the evaluation of the two branches into two separate phases: first classifying the training set once without updating the weights θ, and then training the network on the same inputs under different augmentations and dropout, using the just obtained predictions as targets for the unsupervised loss component. As the training targets obtained this way are based on a single evaluation of the network, they can be expected to be noisy. Temporal ensembling alleviates this by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. It also lets us evaluate the network only once during training, gaining an approximate 2x speedup over the Π-model.
The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors z̃ for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.
After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1 − α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks f from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets z̃, we need to correct for the startup bias in Z by dividing by the factor (1 − α^t). A similar bias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normalization (Salimans & Kingma, 2016). On the first training epoch, Z and z̃ are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.
Algorithm 2 Temporal ensembling pseudocode. Note that the updates of Z and z̃ could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.
Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
Z ← 0_[N×C]  ▷ initialize ensemble predictions
z̃ ← 0_[N×C]  ▷ initialize target vectors
for t in [1, num_epochs] do
  for each minibatch B do
    z_{i∈B} ← f_θ(g(x_{i∈B}, t))  ▷ evaluate network outputs for augmented inputs
    loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]  ▷ supervised loss component
        + w(t) · (1/(C|B|)) Σ_{i∈B} ‖z_i − z̃_i‖²  ▷ unsupervised loss component
    update θ using, e.g., ADAM  ▷ update network parameters
  end for
  Z ← αZ + (1 − α)z  ▷ accumulate ensemble predictions
  z̃ ← Z/(1 − α^t)  ▷ construct target vectors by bias correction
end for
return θ
The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets z̃ can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory mapped file.
An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component z_{i,j}. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.
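The per-epoch accumulation and bias correction of Algorithm 2 amount to a few lines; the NumPy sketch below uses illustrative names and counts epochs from t = 1.

import numpy as np

def update_ensemble_targets(Z, z_epoch, alpha, t):
    # Accumulate this epoch's N x C prediction matrix into the ensemble
    # output Z <- alpha * Z + (1 - alpha) * z, then correct the startup
    # bias by 1 / (1 - alpha^t), as in Adam-style bias correction.
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    z_tilde = Z / (1.0 - alpha ** t)  # training targets for the next epoch
    return Z, z_tilde

With Z initialized to zeros, the correction makes the first epoch's targets exactly equal to that epoch's predictions, and later targets a properly normalized weighted average of past predictions.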
"}, {"section_index": "4", "section_name": "3 RESULTS", "section_text": "Our network structure is given in Table 5, and the test setup and all training parameters are detailed in Appendix A. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.
Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations. By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.
Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels).
Error rate (%) with # labels
Model | 4000 | All (50000)
Supervised-only | 35.56 ± 1.59 | 7.33 ± 0.04
  with augmentation | 34.85 ± 1.65 | 6.05 ± 0.15
Conv-Large, Γ-model (Rasmus et al., 2015) | 20.40 ± 0.47 |
CatGAN (Springenberg, 2016) | 19.58 ± 0.58 |
GAN of Salimans et al. (2016) | 18.63 ± 2.32 |
Π-model | 16.55 ± 0.29 | 6.90 ± 0.07
Π-model with augmentation | 12.36 ± 0.31 | 5.56 ± 0.10
Temporal ensembling with augmentation | 12.16 ± 0.24 | 5.60 ± 0.10
Table 2: SVHN results for 500 and 1000 labels, averages of 10 runs (4 runs for all labels). Cells whose values were lost in extraction are left blank.
Error rate (%) with # labels
Model | 500 | 1000 | All (73257)
Supervised-only | 35.18 ± 5.61 | | 3.05 ± 0.07
  with augmentation | 31.59 ± 3.60 | | 2.88 ± 0.03
DGN (Kingma et al., 2014) | | |
Virtual Adversarial (Miyato et al., 2016) | | |
ADGM (Maaløe et al., 2016) | | |
SDGM (Maaløe et al., 2016) | | |
GAN of Salimans et al. (2016) | 18.44 ± 4.8 | 8.11 |
Π-model | 7.05 ± 0.30 | 5.43 | 2.78 ± 0.03
Π-model with augmentation | 6.65 ± 0.53 | | 2.54 ± 0.04
Temporal ensembling with augmentation | 5.12 ± 0.13 | 4.42 | 2.74 ± 0.06
CIFAR-10 is a dataset consisting of 32 × 32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented Π-model.
Enabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast to train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while the Π-model can randomize once per pair of evaluations, which according to our measurements is ~0.5 percentage points better than independent flips.
A principled comparison with Sajjadi et al. (2016b) is difficult due to several reasons.
They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching, and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, local stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our corresponding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling; in fact, their baseline result is already better than any previous semi-supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).
The street view house numbers (SVHN) dataset consists of 32 × 32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit. In SVHN we chose to use only the official 73257 training examples following Salimans et al. (2016). Even with this choice our error rate with all labels is only 3.05% without augmentation.
Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000 labels we observe an improvement of 2.7 percentage points, from 8.11% to 5.43% without augmentation, and further to 4.42% with standard augmentations.
We also investigated the behavior with 500 labels, where we obtained an error rate less than half of Salimans et al. (2016) without augmentations, with a significantly lower standard deviation as well. When augmentations were enabled, temporal ensembling further reduced the error rate to 5.12%. In this test the difference between the Π-model and temporal ensembling was quite significant at 1.5 percentage points.
In SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that they use fractional max pooling, which is a very augmentation-like technique due to the random, local stretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised-only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given that in a separate experiment our network matched the best published result for non-augmented SVHN when extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us to conclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyond what simple translations can achieve. Our temporal ensembling technique obtains better error rates for both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported by Sajjadi et al. for 732 labels.
Table 3: CIFAR-100 results with 10000 labels, averages of 10 runs (4 runs for all labels). Cells whose values were lost in extraction are left blank.
Error rate (%) with # labels
Model | 10000 | All (50000)
Supervised-only | 51.21 | 29.14 ± 0.25
  with augmentation | 44.56 | 26.42 ± 0.17
Π-model | 43.43 | 29.06 ± 0.21
Π-model with augmentation | | 26.32 ± 0.04
Temporal ensembling with augmentation | 38.65 | 26.30 ± 0.15
Table 4: CIFAR-100 + Tiny Images results, averages of 10 runs.
Error rate (%) with # unlabeled auxiliary inputs from Tiny Images
Model | Random 500k | Restricted 237k
Π-model with augmentation | 25.79 ± 0.17 | 25.43 ± 0.32
Temporal ensembling with augmentation | 23.62 ± 0.23 | 23.79
"}, {"section_index": "5", "section_name": "3.3 CIFAR-100 AND TINY IMAGES", "section_text": "The CIFAR-100 dataset consists of 32 × 32 pixel RGB images from a hundred classes. We are not aware of previous semi-supervised results in this dataset, and chose 10000 labels for our experiments.
Table 3 shows error rates of 43.43% and 38.65% without and with augmentation, respectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervised learning with labeled inputs only.
We ran two additional tests using unlabeled extra data from the Tiny Images dataset (Torralba et al., 2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspond to those found in the CIFAR-100 dataset (see Appendix A for details). The results are shown in Table 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2.7 percentage points (from 26.30% to 23.63%), indicating a desirable ability to learn from random natural images. Temporal ensembling benefited much more from the extra data than the Π-model.
Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve the classification accuracy further. This indicates that in order to train a better classifier by adding extra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as the actual inputs; in our case, natural images. We hypothesize that it may even be possible to use properly crafted synthetic data as unlabeled inputs to obtain improved classifiers.
In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k per epoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and 50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomly on each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch we updated only the rows of Z that corresponded to inputs used on that epoch.
When all labels are used for traditional supervised training, our network approximately matches the state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015; Mishkin & Matas, 2016) at 6.05%, and without augmentation (Salimans & Kingma, 2016) at 7.33%. The same is probably true for SVHN as well, but there the best published results rely on extra data that we chose not to use.
Given this premise, it is perhaps somewhat surprising that our methods reduce the error rate also when all labels are used (Tables 1 and 2).
We believe that this is an indication that the consistency requirement adds a degree of resistance to ambiguous labels that are fairly common in many classification tasks, and that it encourages features to be more invariant to stochastic sampling.
In a further test we studied the hypothesis that our methods add tolerance to incorrect labels by assigning a random label to a certain percentage of the training set before starting to train. Figure 2 shows the classification error graphs for standard supervised training and temporal ensembling. Clearly our methods provide considerable resistance to wrong labels, and we believe this is because the unsupervised loss term encourages the mapping function implemented by the network to be flat in the vicinity of all input data points, whereas the supervised loss term enforces the mapping function to have a specific value in the vicinity of the labeled input data points. This means that even the wrongly labeled inputs play a role in shaping the mapping function: the unsupervised loss term smooths the mapping function and thus also the decision boundaries, effectively fusing the inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient for locking the clusters to the right output vectors through the supervised loss term. The difference to classical regularizers is that we induce smoothness only on the manifold of likely inputs instead of over the entire input domain. For further analysis about the importance of the gradient of the mapping function, see Simard et al. (1998).
[Figure 2: two panels, "Standard supervised" (left) and "Temporal ensembling" (right), plotting classification accuracy (%) over 300 training epochs for 0%, 20%, 50%, 80% and 90% randomized labels.]
Figure 2: Percentage of correct SVHN classifications as a function of training epoch when a part of the labels is randomized. With standard supervised training (left) the classification accuracy suffers when even a small portion of the labels give disinformation, and the situation worsens quickly as the portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling (right) shows almost perfect resistance to disinformation when half of the labels are random, and retains over ninety percent classification accuracy even when 80% of the labels are random.
"}, {"section_index": "6", "section_name": "4 RELATED WORK", "section_text": "There is a large body of previous work on semi-supervised learning (Zhu, 2005). Here we will concentrate on the ones that are most directly connected to our work.
The Γ-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections into an encoder-decoder type network architecture, targeted at semi-supervised learning. In the Γ-model, all but the highest lateral connections in the ladder network are removed, and after pruning the unnecessary stages, the remaining network consists of two parallel, identical branches. One of the branches takes the original training inputs, whereas the other branch is given the same input corrupted with noise. The unsupervised loss term is computed as the squared difference between the (pre-activation) output of the clean branch and a denoised (pre-activation) output of the corrupted branch. The denoised estimate is computed from the output of the corrupted branch using a parametric nonlinearity that has 10 auxiliary trainable parameters per unit. Our Π-model differs from the Γ-model in removing the parametric nonlinearity and denoising, having two corrupted paths, and comparing the outputs of the network instead of pre-activation data of the final layer.
Sajjadi et al.
(2016b) recently introduced a new loss function for semi-supervised learning, the so-called transform/stability loss, which is founded on the same principle as our work. During training, they run augmentation and network evaluation n times for each minibatch, and then compute an unsupervised loss term as the sum of all pairwise squared distances between the obtained n network outputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regularization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity loss term (Sajjadi et al., 2016a) that we do not use. Our Π-model can be seen as a special case of the transform/stability loss obtained by setting n = 2. The computational cost of training with the transform/stability loss increases linearly as a function of n, whereas the efficiency of our temporal ensembling technique remains constant regardless of how large an effective ensemble we obtain via the averaging of previous epochs' predictions.
In bootstrap aggregating, or bagging, multiple networks are trained independently based on subsets of training data (Breiman, 1996). This results in an ensemble that is more stable and accurate than the individual networks. Our approach can be seen as pulling the predictions from an implicit ensemble that is based on a single network, and the variability is a result of evaluating it under different dropout and augmentation conditions instead of training on different subsets of data. In work parallel to ours, Huang et al. (2017) store multiple snapshots of the network during training, hopefully corresponding to different local minima, and use them as an explicit ensemble.
The general technique of inferring new labels from partially labeled data is often referred to as bootstrapping or self-training, and it was first proposed by Yarowsky (1995) in the context of linguistic analysis. Whitney & Sarkar (2012) analyze Yarowsky's algorithm and propose a novel graph-based label propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) infer labels for unlabeled training data by comparing the associated inputs to labeled training inputs using a suitable distance metric. Our approach differs from this in two important ways. Firstly, we never compare training inputs against each other, but instead only rely on the unknown labels remaining constant, and secondly, we let the network produce the likely classifications for the unlabeled inputs instead of providing them through an outside process.
Generative Adversarial Networks (GAN) have been recently used for semi-supervised learning with promising results (Maaløe et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). It could be an interesting avenue for future work to incorporate a generative component to our solution. We also envision that our methods could be applied to regression-type learning tasks.
In addition to partially labeled data, a considerable amount of effort has been put into dealing with densely but inaccurately labeled data. This can be seen as a semi-supervised learning task where part of the training process is to identify the labels that are not to be trusted. For recent work in this area, see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al. (2014) presented a simple bootstrapping method that trains a classifier with the target composed of a convex combination of the previous epoch output and the known but potentially noisy labels.
Our temporal ensembling differs from this by taking into account the evaluations over multiple epochs.
We thank the anonymous reviewers, Tero Karras, Pekka Jänis, Tim Salimans, Ian Goodfellow, as well as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to improve this article.
Table 5: The network architecture used in all of our tests.
NAME | DESCRIPTION
input | 32 × 32 RGB image
noise | Additive Gaussian noise σ = 0.15
conv1a | 128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv1b | 128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv1c | 128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
pool1 | Maxpool 2 × 2 pixels
drop1 | Dropout, p = 0.5
conv2a | 256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv2b | 256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv2c | 256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
pool2 | Maxpool 2 × 2 pixels
drop2 | Dropout, p = 0.5
conv3a | 512 filters, 3 × 3, pad = 'valid', LReLU (α = 0.1)
conv3b | 256 filters, 1 × 1, LReLU (α = 0.1)
conv3c | 128 filters, 1 × 1, LReLU (α = 0.1)
pool3 | Global average pool (6 × 6 → 1 × 1 pixels)
dense | Fully connected 128 → 10
output | Softmax
"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Leo Breiman. Bagging predictors. Machine Learning, 24(2), 1996.
Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, et al. Lasagne: First release, 2015.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. CoRR, abs/1506.02142, 2016.
Benjamin Graham. Fractional max-pooling. CoRR, abs/1412.6071, 2014.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR, abs/1502.01852, 2015.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. CoRR, abs/1603.09382, 2016.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. CoRR, abs/1602.05473, 2016.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In Proc. International Conference on Learning Representations (ICLR), 2016.
Augustus Odena. Semi-supervised learning with generative adversarial networks. Data Efficient Machine Learning workshop at ICML 2016, 2016.
Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. CoRR, abs/1606.03498, 2016.
Saurabh Singh, Derek Hoiem, and David A. Forsyth. Swapout: Learning an ensemble of deep architectures. CoRR, abs/1605.06465, 2016.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net.
CoRR, abs/1412.6806, 2014.
Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks robust to label noise: a loss correction approach. CoRR, abs/1609.03683, 2016.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems 28 (NIPS), 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. CoRR, abs/1406.2080, 2014.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688, May 2016.
Xiaojin Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005.
Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
Table 5 details the network architecture used in all of our tests. It is heavily inspired by ConvPool-CNN-C (Springenberg et al., 2014) and the improvements made by Salimans & Kingma (2016). All data layers were initialized following He et al. (2015), and we applied weight normalization and mean-only batch normalization (Salimans & Kingma, 2016) with momentum 0.999 to all of them. We used leaky ReLU (Maas et al., 2013) with α = 0.1 as the non-linearity, and chose to use max pooling instead of strided convolutions because it gave consistently better results in our experiments.
All networks were trained using Adam (Kingma & Ba, 2014) with a maximum learning rate of λ_max = 0.003, except for temporal ensembling in the SVHN case where a maximum learning rate of λ_max = 0.001 worked better. Adam momentum parameters were set to β₁ = 0.9 and β₂ = 0.999 as suggested in the paper. The maximum value for the unsupervised loss component was set to w_max · M/N, where M is the number of labeled inputs and N is the total number of training inputs. For Π-model runs, we used w_max = 100 in all runs except for CIFAR-100 with Tiny Images, where we set w_max = 300. For temporal ensembling we used w_max = 30 in most runs. For the corrupted label test in Section 3.5 we used w_max = 300 for 0% and 20% corruption, and w_max = 3000 for corruption of 50% and higher. For basic CIFAR-100 runs we used w_max = 100, and for CIFAR-100 with Tiny Images we used w_max = 1000. The accumulation decay constant of temporal ensembling was set to α = 0.6 in all runs.
In all runs we ramped up both the learning rate λ and the unsupervised loss component weight w during the first 80 epochs using a Gaussian ramp-up curve exp[−5(1 − T)²], where T advances linearly from zero to one during the ramp-up period. In addition to ramp-up, we annealed the learning rate λ to zero and Adam β₁ to 0.5 during the last 50 epochs, but otherwise we did not decay them during training. The ramp-down curve was similar to the ramp-up curve but time-reversed and with a scaling constant of 12.5 instead of 5. All networks were trained for 300 epochs with a minibatch size of 100.
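Written directly from the formulas above, the two schedules are a few lines each; the sketch below is our reading of the description (not the released code), with the ramp-down mirrored as stated.

import numpy as np

def rampup(epoch, rampup_length=80):
    # Gaussian ramp-up exp[-5 (1 - T)^2], with T advancing linearly from
    # 0 to 1 over the first `rampup_length` epochs; applied to both the
    # learning rate and the unsupervised weight w(t).
    if epoch >= rampup_length:
        return 1.0
    T = epoch / rampup_length
    return float(np.exp(-5.0 * (1.0 - T) ** 2))

def rampdown(epoch, num_epochs=300, rampdown_length=50):
    # Time-reversed curve with scaling constant 12.5, annealing the
    # learning rate (and Adam beta_1 toward 0.5) over the last 50 epochs.
    if epoch < num_epochs - rampdown_length:
        return 1.0
    T = (epoch - (num_epochs - rampdown_length)) / rampdown_length
    return float(np.exp(-12.5 * T ** 2))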
CIFAR-10: Following previous work in fully supervised learning, we pre-processed the images using ZCA and augmented the dataset using horizontal flips and random translations. The translations were drawn from [−2, 2] pixels, and were independently applied to both branches in the Π-model.
SVHN: We pre-processed the input images by biasing and scaling each input image to zero mean and unit variance. We used only the 73257 items in the official training set, i.e., we did not use the provided 531131 extra items. The training setups were otherwise similar to CIFAR-10, except that horizontal flips were not used.
Model convergence: As discussed in Section 2.1, a slow ramp-up of the unsupervised cost is very important for getting the models to converge. Furthermore, in our very preliminary tests with 250 labels in SVHN we noticed that optimization tended to explode during the ramp-up period, and we eventually found that using a lower value for the Adam β₂ parameter (e.g., 0.99 instead of 0.999) seems to help in this regard.
We do not attempt to guarantee that the occurrence of labeled inputs during training would be somehow stratified; with bad luck there might be several consecutive minibatches without any labeled inputs when the label density is very low. Some previous work has identified this as a weakness, and has solved the issue by shuffling the input sequences in such a way that stratification is guaranteed, e.g. Rasmus et al. (2015) (confirmed from the authors). This kind of stratification might further improve the convergence of our methods as well.
Tiny Images, extra data from restricted categories: The restricted extra data in Section 3.3 was extracted from Tiny Images by picking all images with labels corresponding to the 100 categories used in CIFAR-100. As the Tiny Images dataset does not contain the CIFAR-100 categories aquarium_fish and maple_tree, we used images with labels fish and maple instead. The result was a total of 237,203 images that were used as unlabeled extra data. Table 6 shows the composition of this extra data set.
It is worth noting that the CIFAR-100 dataset itself is a subset of Tiny Images, and we did not explicitly prevent overlap between this extra set and CIFAR-100. This led to approximately a third of the CIFAR-100 training and test images being present as unlabeled inputs in the extra set. The other test with 500k extra entries picked randomly out of all 79 million images had a negligible overlap with CIFAR-100.
Implementation: Our implementation is written in Python using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015), and is available at https://github.com/smlaine2/tempens.
Table 6: The Tiny Images (Torralba et al., 2008) labels and image counts used in the CIFAR-100 plus restricted extra data tests (rightmost column of Table 4).
Note that the extra input images were supplied as unlabeled data for our networks, and the labels were used only for narrowing down the full set of 79 million images.
Label # | Label # | Label # | Label #
apple 2242 | baby 2771 | bear 2242 | beaver 2116
bed 2767 | bee 2193 | beetle 2173 | bicycle 2599
bottle 2212 | bowl 2707 | boy 2234 | bridge 2274
bus 3068 | butterfly 3036 | camel 2121 | can 2461
castle 3094 | caterpillar 2382 | cattle 2089 | chair 2552
chimpanzee 1706 | clock 2375 | cloud 2390 | cockroach 2318
couch 2171 | crab 2735 | crocodile 2712 | cup 2287
dinosaur 2045 | dolphin 2504 | elephant 2794 | fish* 3082
flatfish 1504 | forest 2244 | fox 2684 | girl 2204
hamster 2294 | house 2320 | kangaroo 2563 | keyboard 1948
lamp 2242 | lawn_mower 1929 | leopard 2139 | lion 3045
lizard 2130 | lobster 2136 | man 2248 | maple* 2149
motorcycle 2168 | mountain 2249 | mouse 2128 | mushroom 2390
oak_tree 1995 | orange 2650 | orchid 1902 | otter 2073
palm_tree 2107 | pear 2120 | pickup_truck 2478 | pine_tree 2341
plain 2198 | plate 3109 | poppy 2730 | porcupine 1900
possum 2008 | rabbit 2408 | raccoon 2587 | ray 2564
road 2862 | rocket 2180 | rose 2237 | sea 2122
seal 2159 | shark 2157 | shrew 1826 | skunk 2450
skyscraper 2298 | snail 2369 | snake 2989 | spider 3024
squirrel 2374 | streetcar 1905 | sunflower 2761 | sweet_pepper 1983
table 3137 | tank 1897 | telephone 1889 | television 2973
tiger 2603 | tractor 1848 | train 3020 | trout 2726
tulip 2160 | turtle 2438 | wardrobe 2029 | whale 2597
willow_tree 2040 | wolf 2423 | woman 2446 | worm 2945"}]
BJuysoFeg
[{"section_index": "0", "section_name": "REVISITING BATCH NORMALIZATION FOI\nPRACTICAL DOMAIN ADAPTATION", "section_text": "Yanghao Li', Naiyan Wang\u2019, Jianping Shi\u00b0, Jiaying Liu\u2019, Xiaodi How\nyttonhao@pku.edu.cn winsty@gmail.com shijianping5000@gmail.con\niujiaying@pku.edu.cn xiaodi.hou@gmail.com\nDeep neural networks (DNN) have shown unprecedented success in various com-\nputer vision applications such as image classification and object detection. How-\never, it is still a common annoyance during the training phase, that one has to\nprepare at least thousands of labeled images to fine-tune a network to a specific\ndomain. Recent study (Tommasi et al., 2015) shows that a DNN has strong depen-\ndency towards the training dataset, and the learned features cannot be easily trans-\nferred to a different but relevant task without fine-tuning. In this paper, we propose\na simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to\nincrease the generalization ability of a DNN. By modulating the statistics from\nthe source domain to the target domain in all Batch Normalization layers across\nthe network, our approach achieves deep adaptation effect for domain adaptation\ntasks. In contrary to other deep learning domain adaptation methods, our method\ndoes not require additional components, and is parameter-free. It archives state-\nof-the-art performance despite its surprising simplicity. Furthermore, we demon-\nstrate that our method is complementary with other existing methods. Combining\nAdaBN with existing domain adaptation treatments may further improve model\nperformance."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled\ntraining images that are not easy to obtain. One common practice is to use labeled data from other\nrelated source such as a different public dataset, or harvesting images by keywords from a search\nengine. Because 1) the distributions of the source domains (third party datasets or Internet images)\nare often different from the target domain (testing images); and 2) DNN is particularly good at\ncapturing dataset bias in its internal representation (Torralba & Efros, 2011), which eventually leads\nto overfitting. imperfectly paired training and testing sets usually leads to inferior performance.\nKnown as domain adaptation, the effort to bridge the gap between training and testing data distribu-\ntions has been discussed several times under the context of deep learning (Tzeng et al., 2014; Long\net al., 2015; Tzeng et al., 2015; Ganin & Lempitsky, 2015). To make the connection between the\ndomain of training and the domain of testing, most of these methods require additional optimiza-\ntion steps and extra parameters. Such additional computational burden could greatly complicate the\ntraining of a DNN which is already intimidating enough for most people.\nIn this paper, we propose a simple yet effective approach called AdaBN for batch normalized DN}\ndomain adaptation. We hypothesize that the label related knowledge is stored in the weight matri:\nof each layer, whereas domain related knowledge is represented by the statistics of the Batch Nor\nmalization (BN) (loffe & Szegedy, 2015) layer. Therefore, we can easily transfer the trained mode\nto a new domain by modulating the statistics in the BN layer. 
This approach is straightforward to implement, has zero parameters to tune, and requires minimal computational resources. Moreover, our AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domain adaptation and semi-supervised settings. Fig. 1 illustrates the flowchart of AdaBN. To summarize, our contributions are as follows:
1. We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate bias and variance of a dataset, which is ideal for domain adaptation tasks.
2. We validate the effectiveness of our approach on standard benchmarks for both single source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods.
3. We conduct experiments on cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Illustration of the proposed method. For each convolutional or fully connected layer, we use different bias/variance terms to perform batch normalization for the training domain and the test domain. The domain specific normalization mitigates the domain shift issue."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Domain transfer in visual recognition tasks has gained increasing attention in recent literature (Beijbom, 2012; Patel et al., 2015). Often referred to as covariate shift (Shimodaira, 2000) or dataset bias (Torralba & Efros, 2011), this problem poses a great challenge to the generalization ability of a learned model. One key component of domain transfer is to model the difference between source and target distributions. In Khosla et al. (2012), the authors assign each dataset an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute dataset difference is based on Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods have been proposed, including sample selection (Huang et al., 2006; Gong et al., 2013), explicit projection learning (Pan et al., 2011; Gopalan et al., 2011; Baktashmotlagh et al., 2013) and principal axes alignment (Fernando et al., 2013; Gong et al., 2012; Aljundi et al., 2015).
All of these methods face the same challenge of constructing the domain transfer function: a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions are in the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions.
In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic (Yosinski et al., 2014; Tommasi et al., 2015). To transfer the learned representations to a new dataset, pre-training plus fine-tuning (Donahue et al., 2014) have become de facto procedures. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to re-train the whole network.
A series of progress has been made in DNN to facilitate domain transfer.
Early works of domain adaptation either focus on reordering fine-tuning samples (Chopra et al., 2013), or regularizing MMD (Gretton et al., 2012) in a shallow network (Ghifary et al., 2014). It is only until recently that the problem has been directly attacked under the setting of classification of an unlabeled target domain using modern convolutional neural network (CNN) architectures. DDC (Tzeng et al., 2014) used the classical MMD loss to regularize the representation in the last layer of CNN. DAN (Long et al., 2015) further extended the method to multiple kernel MMD and multiple layer adaptation. Besides adapting features using MMD, RTN (Long et al., 2016) also added a gated residual layer for classifier adaptation. RevGrad (Ganin & Lempitsky, 2015) devised a gradient reversal layer to compensate the back-propagated gradients that are domain specific. Recently, by explicitly modeling both private and shared components of the domain representations in the network, Bousmalis et al. (2016) proposed a Domain Separation Network to extract better domain-invariant features.
Another related work is CORAL (Sun et al., 2016). This model focuses on the last layer of CNN. CORAL whitens the data in the source domain, and then re-correlates the source domain features to the target domain. This operation aligns the second order statistics of the source domain and target domain distributions. Surprisingly, such a simple approach yields state-of-the-art results in various text classification and visual recognition tasks. Recently, Deep CORAL (Sun & Saenko, 2016) also extended the method into DNN by incorporating a CORAL loss."}, {"section_index": "4", "section_name": "2.1 BATCH NORMALIZATION", "section_text": "In this section, we briefly review Batch Normalization (BN) (Ioffe & Szegedy, 2015), which is closely related to our AdaBN. The BN layer is originally designed to alleviate the issue of internal covariate shift, a common problem while training a very deep neural network. It first standardizes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer X \in R^{n \times p}, where n denotes the batch size and p is the feature dimension, the BN layer transforms a feature j \in \{1 \dots p\} into:

\hat{x}_j = \frac{x_j - E[X_{\cdot j}]}{\sqrt{\mathrm{Var}[X_{\cdot j}]}}, \qquad y_j = \gamma_j \hat{x}_j + \beta_j,

where x_j and y_j are the input/output scalars of one neuron response in one data sample; X_{\cdot j} denotes the j-th column of the input data; and \gamma_j and \beta_j are parameters to be learned. This transformation guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution could greatly facilitate model convergence, leading to much faster training speed for CNN. Moreover, if training data are shuffled at each epoch, the same training sample will be applied with different transformations, or in other words, more comprehensively augmented throughout the training. During the testing phase, the global statistics of all training samples are used to normalize every mini-batch of test data.
Extensive experiments have shown that Batch Normalization significantly reduces the number of iterations to converge, and improves the final performance at the same time. The BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual networks (He et al., 2016) and Inception V3 (Szegedy et al., 2015)."},
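For concreteness, the BN transform above can be read as the following NumPy sketch. It is a minimal illustration of the two equations; the `eps` stability constant is an implementation detail the equations omit.

```python
import numpy as np

def batch_norm_forward(X, gamma, beta, eps=1e-5):
    """Apply the BN transform to a mini-batch X of shape (n, p).

    Each feature column j is standardized with its batch mean and variance,
    then scaled by gamma_j and shifted by beta_j (learned parameters).
    """
    mean = X.mean(axis=0)               # E[X_.j] for every feature j
    var = X.var(axis=0)                 # Var[X_.j] for every feature j
    X_hat = (X - mean) / np.sqrt(var + eps)
    return gamma * X_hat + beta         # y_j = gamma_j * x_hat_j + beta_j
```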
{"section_index": "5", "section_name": "3 THE MODEL", "section_text": "In Sec. 3.1, we first analyze the domain shift in deep neural networks, and reveal two key observations. Then in Sec. 3.2, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations."}, {"section_index": "6", "section_name": "3.1 A PILOT EXPERIMENT", "section_text": "The Batch Normalization (BN) technique is originally proposed to help SGD optimization by aligning the distribution of training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different datasets at different layers of the network.
In this pilot experiment, we use the MXNet implementation (Chen et al., 2016b) of the Inception-BN model (Ioffe & Szegedy, 2015) pre-trained on the ImageNet classification task (Russakovsky et al., 2015) as our baseline DNN model. Our image data are drawn from (Bergamo & Torresani, 2010), which contains the same classes of images from both the Caltech-256 dataset (Griffin et al., 2007) and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from the Caltech-256 or the Bing dataset.
Fig. 2 visualizes the distributions of mini-batch feature vectors from the two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters.
Figure 2: t-SNE (Van der Maaten & Hinton, 2008) visualization of the mini-batch BN feature vector distributions in both (a) shallow and (b) deep layers, across different datasets. Each point represents the BN statistics in one mini-batch. Red dots come from the Bing domain, while the blue ones are from the Caltech-256 domain. The size of each mini-batch is 64.
1. Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough.
2. The statistics of the BN layer contain the traits of the data domain.
Both observations motivate us to adapt the representation across different domains by the BN layer. Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows¹:
Algorithm 1 Adaptive Batch Normalization (AdaBN)
for each neuron j in the DNN do
    Concatenate the responses of neuron j on all images of the target domain t: x_j^t = [\dots, x_j(m), \dots]
    Compute the target-domain mean and variance: \mu_j^t = E(x_j^t), \sigma_j^t = \sqrt{\mathrm{Var}(x_j^t)}
end for
for each neuron j in the DNN, each testing sample m in the target domain do
    Compute the BN output y_j(m) := \gamma_j \frac{x_j(m) - \mu_j^t}{\sigma_j^t} + \beta_j
end for
¹In practice we adopt an online algorithm (Donald, 1999) to efficiently estimate the mean and variance.
The intuition behind our method is straightforward: the standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter whether it comes from the source domain or the target domain. Although modulating statistics in one BN layer by AdaBN is a simple translation and scaling operation, such a linear transformation in one layer can achieve a highly non-linear transformation through the whole deep CNN architecture. Thus, we believe this AdaBN process could approximate the intrinsically non-linear domain transfer function.
For K domain adaptation where K > 2, we standardize each sample by the statistics in its own domain. During training, the statistics are calculated for every mini-batch; the only thing we need to make sure is that the samples in every mini-batch are from the same domain. For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method could fit in all different settings of domain adaptation with minimal effort.
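In code, Algorithm 1 reduces to re-estimating the BN statistics on target-domain data while freezing all learned weights. The following is a minimal sketch assuming a PyTorch-style model and a data loader yielding (image, ...) batches; the paper's own implementation uses MXNet, so treat this as an illustration rather than the reference code.

```python
import torch

def adapt_bn_statistics(model, target_loader, device="cpu"):
    """AdaBN: re-estimate every BN layer's mean/variance on the target domain.

    Weights (including the BN scale gamma and shift beta) stay frozen; only
    the running statistics are replaced by target-domain estimates.
    """
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # cumulative moving average over all batches
    model.train()              # BN updates running stats only in train mode
    with torch.no_grad():      # no gradients, so no weight is ever changed
        for batch in target_loader:
            images = batch[0] if isinstance(batch, (list, tuple)) else batch
            model(images.to(device))
    model.eval()
    return model
```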
Compared with CORAL (Sun et al., 2016), one natural question is why we transform the neuron responses independently, rather than decorrelating and then re-correlating the responses together as suggested in Sun et al. (2016). Under certain conditions, decorrelation could improve the performance. However, in CNN the mini-batch size is usually smaller than the feature dimension, so the covariance matrix is always singular and hard to invert. In addition, decorrelation requires computing the inverse of the covariance matrix, which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network."}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETTINGS", "section_text": "In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, empirically analyze our AdaBN model, and evaluate our method on a practical application with remote sensing images. We first introduce our experiments on two standard datasets: Office (Saenko et al., 2010) and Caltech-Bing (Bergamo & Torresani, 2010).
Office (Saenko et al., 2010) is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: Amazon (A), DSLR (D) and Webcam (W). Similar to (Tzeng et al., 2014; Sun et al., 2016; Long et al., 2015), we evaluate the pairwise domain adaptation performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks {A, W} → D, {A, D} → W, {D, W} → A.
Caltech-Bing (Bergamo & Torresani, 2010) is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains, Caltech-256 (C) and Bing (B). The images in the Bing set are collected from the Bing image search engine by keyword search. Apparently Bing data contains noise, and its data distribution is dramatically different from that of Caltech-256.
We compare our approach with a variety of methods, including four shallow methods: SA (Fernando et al., 2013), LSSA (Aljundi et al., 2015), GFK (Gong et al., 2012), CORAL (Sun et al., 2016), and four deep methods: DDC (Tzeng et al., 2014), DAN (Long et al., 2015), RevGrad (Ganin & Lempitsky, 2015), Deep CORAL (Sun & Saenko, 2016). Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that would map the source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in DNN. It should be noted that these deep learning methods have the adaptation layers on top of the output layers of DNNs, which is a sharp contrast to our method that delves into early convolution layers as well with the help of BN layers.
We follow the full protocol (Donahue et al., 2014) for the single source setting, while for the multiple sources setting, we use all the samples in the source domains as training data, and use all the samples in the target domain as testing data. We fine-tune the Inception-BN (Ioffe & Szegedy, 2015) model on the source domain in each task for 100 epochs.
The learning rate is set to 0.01 initially, and is then dropped by a factor of 0.1 every 40 epochs. Since the Office dataset is quite small, following the best practice in Long et al. (2015), we freeze the first three groups of Inception modules, and set the learning rate of the fourth and fifth groups to one tenth of the base learning rate to avoid overfitting. For the Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate.
Table 1: Single source domain adaptation results on the Office-31 (Saenko et al., 2010) dataset with the standard unsupervised adaptation protocol.
Method | A→W | D→W | W→D | A→D | D→A | W→A | Avg
AlexNet (Krizhevsky et al., 2012) | 61.6 | 95.4 | 99.0 | 63.8 | 51.1 | 49.8 | 70.1
DDC (Tzeng et al., 2014) | 61.8 | 95.0 | 98.5 | 64.4 | 52.1 | 52.2 | 70.6
DAN (Long et al., 2015) | 68.5 | 96.0 | 99.0 | 67.0 | 54.0 | 53.1 | 72.9
Deep CORAL (Sun & Saenko, 2016) | 66.4 | 95.7 | 99.2 | 66.8 | 52.8 | 51.5 | 72.1
RevGrad (Ganin & Lempitsky, 2015) | 73.0 | 96.4 | 99.2 | - | - | - | -
Inception BN (Ioffe & Szegedy, 2015) | 70.3 | 94.3 | 100 | 70.5 | 60.1 | 57.9 | 75.5
SA (Fernando et al., 2013) | 69.8 | 95.5 | 99.0 | 71.3 | 59.4 | 56.9 | 75.3
GFK (Gong et al., 2012) | 66.7 | 97.0 | 99.4 | 70.1 | 58.0 | 56.9 | 74.7
LSSA (Aljundi et al., 2015) | 67.7 | 96.1 | 98.4 | 71.3 | 57.8 | 57.8 | 74.9
CORAL (Sun et al., 2016) | 70.9 | 95.7 | 99.8 | 71.9 | 59.0 | 60.2 | 76.3
AdaBN | 74.2 | 95.7 | 99.8 | 73.1 | 59.8 | 57.4 | 76.7
AdaBN + CORAL | 75.4 | 96.2 | 99.6 | 72.7 | 59.0 | 60.5 | 77.2"}, {"section_index": "8", "section_name": "4.2.1 OFFICE DATASET", "section_text": "Our results on the Office dataset are reported in Table 1 and Table 2 for single/multi source(s), respectively. Note that the first five models in Table 1 are pre-trained on AlexNet (Krizhevsky et al., 2012) instead of the Inception-BN (Ioffe & Szegedy, 2015) model, due to the lack of a publicly available pre-trained Inception-BN model in Caffe (Jia et al., 2014). Thus, the relative improvements over the baseline (AlexNet/Inception-BN) are more meaningful than the absolute numbers of each algorithm.
From Table 1, we first notice that Inception-BN indeed improves over AlexNet on average, which means that the CNN pre-trained on ImageNet has learned general features, and the improvements on ImageNet can be transferred to new tasks. Among the methods based on Inception-BN features, our method improves the most over the baseline. Moreover, since our method is complementary to other methods, we can simply apply CORAL on top of AdaBN. Not surprisingly, this simple combination exhibits a 0.5% increase in performance. This preliminary test reveals further potential of AdaBN if combined with other advanced domain adaptation methods. Finally, we improve 1.7% over the baseline, and advance the state-of-the-art results for this dataset.
None of the compared methods has reported their performance on multi-source domain adaptation. To demonstrate the capacity of AdaBN under multi-domain settings, we compare it against CORAL, which is the best performing algorithm in the single source setting. The result is reported in Table 2. We find that simply combining two domains does not lead to better performance. The result is generally worse compared to the best performing single domain between the two. This phenomenon suggests that if we cannot properly cope with domain bias, the increase of training samples may adversely affect the testing performance. This result confirms the necessity of domain adaptation. In this more challenging setting, AdaBN still outperforms the baseline and CORAL on average. Again, when combined with CORAL, our method demonstrates further improvements.
At last, our method achieves a 2.3% gain over the baseline.
Table 2: Multi-source domain adaptation results on the Office-31 (Saenko et al., 2010) dataset with the standard unsupervised adaptation protocol."}, {"section_index": "9", "section_name": "4.2.2 CALTECH-BING DATASET", "section_text": "To further evaluate our method on a large-scale dataset, we show our results on the Caltech-Bing dataset in Table 3. Compared with CORAL, AdaBN achieves better performance, which improves 1.8% over the baseline. Note that all the domain adaptation methods show minor improvements over the baseline in the task C → B. One hypothesis for this relatively small improvement is that the images in the Bing dataset are collected from the Internet, and are more diverse and noisier (Bergamo & Torresani, 2010). Thus, it is not easy to adapt to the Bing dataset from the relatively clean dataset Caltech-256. Combining CORAL with our method does not offer further improvements.
This might be explained by the noise of the Bing dataset and the imbalance of the number of images in the two domains.
Table 3: Single source domain adaptation results on the Caltech-Bing (Bergamo & Torresani, 2010) dataset.
Method | C→B | B→C | Avg
Inception BN (Ioffe & Szegedy, 2015) | 35.1 | 64.6 | 49.9
CORAL (Sun et al., 2016) | 35.3 | 67.2 | 51.3
AdaBN | 35.2 | 68.1 | 51.7
AdaBN + CORAL | 35.0 | 67.5 | 51.2"}, {"section_index": "10", "section_name": "4.3 EMPIRICAL ANALYSIS", "section_text": "In this section, we investigate the influence of the number of samples in the target domain on the performance and empirically analyze the adaptation effect of different BN layers."}, {"section_index": "11", "section_name": "4.3.1 SENSITIVITY TO TARGET DOMAIN SIZE.", "section_text": "Since the key of our method is to calculate the mean and variance of the target domain on different BN layers, it is very natural to ask how many target images are necessary to obtain stable statistics. In this experiment, we randomly select a subset of images in the target domain to calculate the statistics and then evaluate the performance on the whole target set. Fig. 3 illustrates the effect of using different numbers of batches. The results demonstrate that our method can obtain good results when using only a small part of the target examples. It should also be noted that in the extreme case of one batch of target images, our method still achieves better results than the baseline. This is valuable in practical use since a large number of target images are often not available.
Figure 3: Accuracy when varying the number of mini-batches used for calculating the statistics of BN layers in A → W and B → C, respectively. For B → C, we only show the results of using fewer than 100 batches, since the results are very stable when adding more examples. The batch size is 64 in this experiment. For an even smaller number of examples, the performance may not be consistent and may drop behind the baseline (e.g. 0.652 with 16 samples, 0.661 with 32 samples)."}, {"section_index": "12", "section_name": "4.3.2. ADAPTATION EFFECT FOR DIFFERENT BN LAYERS.", "section_text": "In this experiment, we analyze the effect of adapting different BN layers with our AdaBN method. According to the structure of the Inception-BN network (Ioffe & Szegedy, 2015), we categorize the BN layers into 9 blocks: 1, 2, 3a, 3b, 4a, 4b, 4c, 5a, 5b. Since the later BN layers are influenced by the outputs of previous BN layers, when adapting a specific block we adapted all the blocks before it. Fig. 4 illustrates the adaptation effect for different BN layers. It shows that adapting BN layers consistently improves the results over the baseline method in most cases. Specifically, when incorporating more BN layers in the adaptation, we achieve better transfer results.
Figure 4: Accuracy when adapting with different BN blocks in B → C. 0 corresponds to the result with the non-adapt method, and 1, 2, 3a, 3b, 4a, 4b, 4c, 5a, 5b correspond to the nine different blocks in the Inception-BN network."}, {"section_index": "13", "section_name": "4.4 PRACTICAL APPLICATION FOR CLOUD DETECTION IN REMOTE SENSING IMAGES", "section_text": "In this section, we further demonstrate the effectiveness of AdaBN on a practical problem: cloud detection in remote sensing images. Since remote sensing images are taken by different satellites with different sensors and resolutions, the captured images are visually different in texture, color, and value range distributions, as shown in Fig. 5.
How to adapt a model trained on images from one satellite to images from another satellite is naturally a domain adaptation problem.
Our task here is to identify cloud from the remote sensing images, which can be regarded as a semantic segmentation task. The experiment is conducted on a self-collected dataset, which includes three image sets, from the GF2, GF1 and Tianhui satellites. Each image set contains 635, 324 and 113 images with resolution over 6000x6000 pixels, respectively. We name the three different datasets following the satellite names. The GF2 dataset is used as the training dataset while the GF1 and Tianhui datasets are for testing. We use a state-of-the-art semantic segmentation method (Chen et al., 2016a) as our baseline model.
Figure 5: Remote sensing images in different domains: (a) GF1 image, (b) GF2 image, (c) Tianhui image.
Table 4: Domain adaptation results (mIOU) on GF1 and Tianhui datasets when training on the GF2 dataset.
The results on the GF1 and Tianhui datasets are shown in Table 4. The relatively low results of the baseline method indicate that there exists a large distribution disparity among images from different satellites. Thus, the significant improvement after applying AdaBN reveals the effectiveness of our method. Some of the visual results are shown in Fig. 6. Since other domain adaptation methods require either additional optimization steps and extra components (e.g. MMD) or post-processing distribution alignment (like CORAL), it is very hard to apply these methods from image classification to this large-size (6000x6000) segmentation problem. Comparatively, besides the effective performance, our method needs no extra parameters and very few computations over the whole adaptation process.
Figure 6: Visual cloud detection results on the GF1 dataset: (a) original image, (b) without AdaBN, (c) AdaBN. White pixels in (b) and (c) represent the detected cloud regions."}, {"section_index": "14", "section_name": "5 CONCLUSION AND FUTURE WORKS", "section_text": "In this paper, we have introduced a simple yet effective approach for domain adaptation on batch normalized neural networks. Besides its original uses, we have exploited another functionality of the Batch Normalization (BN) layer: domain adaptation. The main idea is to replace the statistics of each BN layer in the source domain with those in the target domain. The proposed method is easy to implement and parameter-free, and it takes almost no effort to extend to multiple source domains and semi-supervised settings. Our method established new state-of-the-art results on both single and multiple source(s) domain adaptation settings on standard benchmarks. Finally, the experiments on cloud detection for large-size remote sensing images further demonstrate the effectiveness of our method in practical use. We believe our method opens up a new direction for domain adaptation.
In contrast to other methods that use Maximum Mean Discrepancy (MMD) or domain confusion loss to update the weights in CNN for domain adaptation, our method only modifies the statistics of the BN layer. Therefore, our method is fully complementary to other existing deep learning based methods.
It is interesting to see how these different methods can be unified under one framework."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Rahaf Aljundi, Rémi Emonet, Damien Muselet, and Marc Sebban. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation. In CVPR, 2015.
Mahsa Baktashmotlagh, Mehrtash Harandi, Brian Lovell, and Mathieu Salzmann. Unsupervised domain adaptation by domain invariant projection. In ICCV, pp. 769-776, 2013.
Alessandro Bergamo and Lorenzo Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, pp. 181-189, 2010.
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In NIPS, 2016.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016a.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learning Systems, 2016b.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.
Donald E. Knuth. The art of computer programming, volume 3: Sorting and searching. 1999.
Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960-2967, 2013.
Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180-1189, 2015.
Muhammad Ghifary, W Bastiaan Kleijn, and Mengjie Zhang. Domain adaptive neural networks for object recognition. In PRICAI: Trends in Artificial Intelligence, pp. 898-904, 2014.
Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pp. 2066-2073, 2012.
Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, pp. 999-1006, 2011.
Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Schölkopf, and Alex J Smola. Correcting sample selection bias by unlabeled data. In NIPS, pp. 601-608, 2006.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pp. 675-678, 2014.
Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A Efros, and Antonio Torralba. Undoing the damage of dataset bias. In ECCV, pp. 158-171, 2012.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.
Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan.
Learning transferable features with deep adaptation networks. In ICML, pp. 97-105, 2015.
Mingsheng Long, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NIPS, 2016.
Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2011.
Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53-69, 2015.
Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In ECCV, pp. 213-226, 2010.
Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
Baochen Sun and Kate Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. arXiv preprint arXiv:1607.01719, 2016.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A deeper look at dataset bias. German Conference on Pattern Recognition, 2015.
Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(1):723-773, 2012.
Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR, pp. 1521-1528, 2011.
Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks. In ICCV, pp. 4068-4076, 2015.
Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, pp. 3320-3328, 2014."}]
SJJN38cge
[{"section_index": "0", "section_name": "DISTRIBUTED TRANSFER LEARNING\nFOR DEEP CONVOLUTIONAL NEURAL NETWORKS\nBY BASIC PROBABILITY ASSIGNMENT", "section_text": "Arash Shahriari\nResearch School of Engineering, Australian National University\nCommonwealth Scientific and Industrial Research Organisation\nTransfer learning is a popular practice in deep neural networks, but fine-tuning\nof a large number of parameters is a hard challenge due to the complex wiring\nof neurons between splitting layers and imbalance class distributions of original\nand transferred domains. Recent advances in evidence theory show that in an\nimbalance multiclass learning problem, optimizing of proper objective functions\nbased on contingency tables prevents biases towards high-prior classes. Transfer\nlearning usually deals with highly non-convex objectives and local minima in deep\nneural architectures. We propose a novel distributed transfer learning to tackle\nboth optimization complexity and class-imbalance problem jointly. Our solution\nimposes separated greedy regularization to each individual convolutional filter to\nmake single-filter neural networks such that the minority classes perform as the\nmajority ones. Then, basic probability assignment from evidence theory boosts\nthese distributed networks to improve the recognition performance on the target\ndomains. Our experiments on several standard datasets confirm the consistent\nimprovement as a result of our distributed transfer learning strategy."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Transfer learning for deep neural networks has been proved highly beneficial to boost their overal\nperformance. Deep learning practices usually require huge amount of labeled data to learn powerfu\nmodels. The transfer learning enables adaptation to a different source with small training samples\nOn the other hand, deep neural networks practically learn intermediate features. They could provide\nbetter transfer among domains because some of them generalize well among various domains o\n. These transferable features generally underlies several probability\n\nknowledge (\ndistributions |Oquab et al./(2014) which reduce the cross-domain discrepancy (2014)\nThe common observation among several deep architectures is that features learned in bottom layer:\nare not that specific, but transiting towards top layers makes them tailored to a dataset or task. A\nrecent study [Yosinski et al.| (2014) of the generality or specificity of deep layers for the sake of\ntransfer learning reveals two difficulties which may affect the transfer of deep features. First, tor\nlayers get quite specialized to their original tasks and second, some optimization difficulties rise\ndue to the splitting of the network between co-adapted layers. In spite of these negative effects, i"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In supervised learning, many classification algorithms assume the same distribution for training and\ntesting data. Consequently, change of distribution requires rebuilding of the statistical models which\nis not always practical because of the hardship of recollecting of training data or heavy learning\nprocess. One of the solutions is transfer learning that transfers the classification knowledge into a\nnew domain |Pan & Yang|(2010). 
This aims at learning highly-generalized models with different probability distributions across domains to learn novel domains without labeled data (Wang & Schneider, 2014; Zhang et al., 2013). Here, the main challenge is to reduce the shifts in data distribution between domains by algorithms that minimize the discriminant of the domains. It is worth mentioning that this could not get rid of domain-specific variations (Long et al., 2016).
In this paper, we propose a framework for distributed transfer learning in deep convolutional networks. This tries to alleviate the burden of splitting networks in the middle of fragile co-adapted layers. The intuition is that the above difficulty relates to the complexity of deep architectures and also to class-imbalance in the transferred domain.
On the matter of network complexity, we argue that the splitting of layers leads to a hard optimization problem because of the high complexity in the interconnections between neurons of co-adapted layers: it seems that transfer learning is not able to thoroughly reconstruct the original powerful wiring for the transferred domain. This is due to the size of the network and the large number of interconnections across neurons. To address this issue, we fine-tune the convolutional filters separately and hence reduce the complexity of the non-convex optimization.
On the other hand, it seems that the class-imbalance problem rises from the different distributions of data in the original and transferred domains. This issue can be handled by cost-sensitive imbalanced classification methods. By class-imbalance in the transferred domain, we mean variable coverage of common classes in this domain and the ones from the original domain. It is probable that both the original and transferred datasets have uniform distributions of data among their classes, but some classes in one domain may be fully or partly covered by the other domain. This results in an imbalanced class distribution in the transfer learning.
The determination of a probabilistic distribution from the confusion matrix is highly effective to produce a probability assignment which contributes to class-imbalance problems. This basic probability assignment can be constructed either from recognition, substitution and rejection rates (Xu et al., 1992) or from both precision and recall rates of each class (Deng et al., 2016). The key point is harvesting the maximum possible prior knowledge provided by the confusion matrix to overcome the imbalanced classification challenge.
Since the power of deep convolutional models comes from the mutual optimization of all parameters, we join the above distributed fine-tuned filters by a boosting scheme based on basic probability assignment. Our experiments confirm the functionality of our distributed strategy for deep transfer learning. The rest of the paper is organized as follows. We present the formulation of our method in Section 2, report our experiments in Section 3 and conclude in Section 4."}, {"section_index": "3", "section_name": "2 FORMULATION", "section_text": "In general, a confusion matrix represents the class-based predictions against actual labels in the form of a square matrix. Inspired by Dempster-Shafer theory, the construction of a basic probability assignment (BPA) gives a vector which is independent of the number of class samples and sums up to one for each individual label. This basic probability assignment provides the ability to reflect the different contributions of a classifier to each individual class, or to combine the outcomes of multiple weak classifiers."}, {"section_index": "4", "section_name": "2.1 BASIC PROBABILITY ASSIGNMENT", "section_text": "A raw two-dimensional confusion matrix indexed by predicted classes and actual labels provides some common measures of classification performance.
They are accuracy (the proportion of the total number of predictions that were correct), precision (a measure of the accuracy provided that a specific class has been predicted), recall (a measure of the ability of a prediction model to select instances of a certain class from a dataset) and F-score (the harmonic mean of precision and recall) (Sammut & Webb, 2011).
Suppose a set of train/validation samples X = \{X_1, \dots, X_{|X|}\} from C = \{C_1, \dots, C_{|C|}\} different classes is assigned to a label set L = \{L_1, \dots, L_{|C|}\} by a classifier \phi such that |C| = |L|. If each element (n_{ij}) of the confusion matrix C(\phi) is considered as the number of samples belonging to class C_i which are assigned to label L_j, then we can define recall (r_{ij}) and precision (p_{ij}) ratios as follows (Deng et al., 2016):

r_{ij} = \frac{n_{ij}}{\sum_{j=1}^{|C|} n_{ij}}, \qquad p_{ij} = \frac{n_{ij}}{\sum_{i=1}^{|C|} n_{ij}}   (1)

It can be seen that the recall ratio is summed over the actual labels (rows) whilst the precision ratio is accumulated by the predicted classes (columns) of the confusion matrix C(\phi). Now, we are able to define recall and precision matrices as

R(\phi) = \{r_{ij}\}, \qquad P(\phi) = \{p_{ij}\} \qquad \text{for } i, j \in [1 \dots |C|]   (2)

The basic probability assignments of these matrices contain recall and precision probability elements for each individual class C_i such that

m_{r_i} = \frac{r_{ii}}{\sum_{j=1}^{|C|} r_{ji}}, \qquad m_{p_i} = \frac{p_{ii}}{\sum_{j=1}^{|C|} p_{ij}}   (3)

These elements are synthesized to form the final probability assignments representing the recognition ability of classifier \phi for each of the classes of set C:

m_i = m_{r_i} \oplus m_{p_i} = \frac{m_{r_i} \times m_{p_i}}{\sum_{i=1}^{|C|} m_{r_i} \times m_{p_i}}   (4)

Here, the operator \oplus is an orthogonal sum which is applied by Dempster's rule of combination (Sentz & Ferson, 2002). The overall contribution of the classifier \phi can be presented as a probability assignment vector

BPA(\phi) = \{m_i\} \qquad \text{for } i \in [1 \dots |C|]   (5)

It is worth mentioning that BPA(\phi) should be computed on the train/validation set because we assume that the test set does not include actual labels. Besides, combination of different classes under vertical or horizontal categories is a common practice in visual classification. The benefit lies in the fact that bottom layers of deep convolutional architectures make a better contribution to detecting first and second order features that are usually of specific directions (vertical vs. horizontal) rather than detailed distinguished patterns of the objects. This leads to a powerful hierarchical feature learning in the case that |C| < |L|. In contrast, some classes can be divided into various sub-categories although they all get the same initial labels, and hence this holds |C| >> |L| to take advantage of top layers. In the above formulation, we do not merge or divide the original setup of the datasets under study (|C| = |L|), although it seems that our BPA-based approach is also able to boost the trained classifiers for each of the merge/divide scenarios.
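Computed from a confusion matrix, the whole chain of Equations (1)-(5) fits in a few lines of NumPy. This is a minimal sketch; the exact denominators in Equation (3) follow our reconstruction of the damaged originals above, so treat them as assumptions rather than the definitive formulation.

```python
import numpy as np

def basic_probability_assignment(confusion, eps=1e-12):
    """BPA vector from a |C| x |C| confusion matrix (Eqs. 1-5, reconstructed).

    Rows index actual classes, columns index predicted labels. Recall ratios
    are row-normalized and precision ratios column-normalized; the per-class
    recall and precision masses are fused with a Dempster-style orthogonal
    sum and renormalized so the resulting vector sums to one.
    """
    confusion = np.asarray(confusion, dtype=float)
    recall = confusion / (confusion.sum(axis=1, keepdims=True) + eps)     # r_ij
    precision = confusion / (confusion.sum(axis=0, keepdims=True) + eps)  # p_ij
    m_r = np.diag(recall) / (recall.sum(axis=0) + eps)        # r_ii / sum_j r_ji
    m_p = np.diag(precision) / (precision.sum(axis=1) + eps)  # p_ii / sum_j p_ij
    fused = m_r * m_p                                         # orthogonal sum
    return fused / (fused.sum() + eps)                        # m_i of Eq. (5)
```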
Figure 1: Conventional and Distributed Transfer Learning. The blue blocks (Conv) represent convolutional layers in the original domain, the red blocks (Softmax) show fine-tuned layers for the target domain, and the green block corresponds to the basic probability assignment (BPA), respectively."}, {"section_index": "5", "section_name": "2.2 DISTRIBUTED TRANSFER LEARNING", "section_text": "A general practice in transfer learning includes training of an original deep neural network on a dataset and then fine-tuning of the learned features for another dataset on a new target network. The generality of the selected features for both original and target domains is critical to the success of transfer learning. For implementation, we train the original network and copy its bottom layers to form the target network. The top layers of the target network are initialized randomly and trained on the target dataset. We are able to employ backpropagation from top to bottom layers and fine-tune their parameters for the target task, or freeze the copied originals and only update the top target layers. This can be decided by the size of the target dataset and the number of parameters in the original layers. Fine-tuning of large networks for a small dataset leads to overfitting, but for a small network or a large dataset, performance will be improved (Sermanet et al., 2013).
Suppose that C_i is the predicted class for a test sample T provided by classifier \phi. To revise the classification outcome by the BPA calculation, we multiply the test sample's unary potentials U(T) = \{u_1, \dots, u_{|C|}\} (probabilities of belonging to each class) by an assignment vector M(\phi) = \{1 - m_1, \dots, 1 - m_{|C|}\} (contributions of the classifier \phi to each class) and pick the maximum index as the revised predicted label

C(T) = \arg\max \{u_1 \times (1 - m_1), \dots, u_{|C|} \times (1 - m_{|C|})\}   (6)

Based on our formulation for basic probability assignment (BPA) in Section 2.1, we are able to follow the above transfer learning procedure by learning a classifier \phi (SVM or Softmax) and computing BPA(\phi) using Algorithm 1. Here, the learning means fine-tuning of the target domain using the trained weights and biases of the original network. To implement this, we train the original fully-connected layers with the features calculated by presenting the target's train set to the convolutional layers of the same original network. We deploy this procedure for each of the available convolutional filters separately and compute the BPA of each individual single-filter network for the train/validation sets. Then, we combine the unary potentials of all the fine-tuned classifiers by employing BPA weights to come up with a unit set of class probabilities. Figure 1 provides an overview of the conventional and distributed transfer learning processes.
This implies that if classifier \phi performs well on class C_i (high m_i), it is highly probable that C(T) leans towards C_i. At the same time, other minority classes like C_j (low m_j) have a chance to win if their unary potentials are high enough (u_j > u_i). In contrast, if \phi does a poor classification on class C_i (low m_i), the possibility of updating C(T) to another class (C_j) with an even worse unary potential (u_j < u_i) would be higher. Therefore, BPA proves quite successful in handling imbalanced data distributions among classes.
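Equation (6) is a one-line operation in practice. A minimal sketch, with `unary` and `bpa` as the vectors U(T) and \{m_1, \dots, m_{|C|}\} defined above:

```python
import numpy as np

def revise_prediction(unary, bpa):
    """Revise a single classifier's prediction with its BPA vector (Eq. 6).

    unary: length-|C| array of class probabilities u_i for a test sample.
    bpa:   length-|C| array of per-class assignments m_i of the classifier.
    Returns the index of the revised predicted class.
    """
    unary = np.asarray(unary, dtype=float)
    bpa = np.asarray(bpa, dtype=float)
    return int(np.argmax(unary * (1.0 - bpa)))
```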
As described in Section 1, employing probability assignment addresses the class-imbalance problem but does not reduce the complexity of optimization, because both forward learning and error backpropagation are applied to all the model parameters. To break this non-convex optimization, we introduce our distributed transfer learning strategy. For implementation, we replace the mutual learning of all the parameters with learning of each individual convolutional filter in a separate classifier fed by the bottom original layer. It means that we train a set of weak single-filter classifiers F = \{\phi_1, \dots, \phi_{|F|}\}, where |F| equals the number of convolutional filters in the deep neural architecture. We follow the recipe of the single classifier in Equation 5 but extend it to redefine

BPA(F) = \{m_{ij}\} \qquad \text{for } i \in [1 \dots |C|],\ j \in [1 \dots |F|]   (7)

such that m_{ij} is the probability assignment of class C_i to the weak single-filter classifier \phi_j. To come up with the class of the test sample T, we update Equation 6 as follows

C_F(T) = \arg\max \left\{ \frac{\sum_{j=1}^{|F|} u_{1j} \times (1 - m_{1j})}{\sum_{i=1}^{|C|} \sum_{j=1}^{|F|} u_{ij} \times (1 - m_{ij})}, \dots, \frac{\sum_{j=1}^{|F|} u_{|C|j} \times (1 - m_{|C|j})}{\sum_{i=1}^{|C|} \sum_{j=1}^{|F|} u_{ij} \times (1 - m_{ij})} \right\}   (8)

Here, u_{ij} is the unary potential of class C_i determined by the weak single-filter classifier \phi_j. Building on the above formulations, we are able to distribute the transfer learning among convolutional filters and join them later to implement a better fine-tuning for the target deep convolutional network according to Algorithm 2."},
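Equation (8) pools the BPA-discounted unary potentials over all single-filter classifiers before taking the arg max. A minimal sketch under our reconstruction of the normalization in Eq. (8); the shared denominator does not change the arg max, so it is folded into a single renormalization here.

```python
import numpy as np

def distributed_prediction(unary, bpa):
    """Combine |F| weak single-filter classifiers with their BPAs (Eq. 8).

    unary: |C| x |F| array, unary[i, j] = u_ij from filter classifier phi_j.
    bpa:   |C| x |F| array, bpa[i, j] = m_ij from BPA(F) in Eq. (7).
    Returns the index of the predicted class for the test sample.
    """
    weighted = np.asarray(unary, dtype=float) * (1.0 - np.asarray(bpa, dtype=float))
    class_scores = weighted.sum(axis=1)           # pool over the filters
    return int(np.argmax(class_scores / class_scores.sum()))
```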
In the second experiment, we apply transfer learning for pairs of datasets with\nfar data/class setups which are MNIST & CIFAR-10 and SVHN & CIFAR-100. In this experiment,\nwe arrange the datasets to examine the effect of dissimilar distributions rather than overfitting.\nTable|2|shows the performance of conventional and distributed transfer learnings for the first sce.\nnario. The first values before dash correspond to the training errors (left) and the second ones presen\nthe testing errors (right).\nIn this experiment, we target two pairs of datasets (original-target domains) which contain simila\ndata and perform number/object recognition tasks. We report the results for both conventional anc\nour distributed transfer learning methods. By conventional [Bengio et al.| we mean training\nthe original dataset and fine-tuning of the target one. With distributed, we ai training the origina\ndataset but employing the basic probability assignment for the transfer learning.\nIt can be seen that the results for the conventional transfer learning follows our argument on size\nof network and number of model parameters (2013). Compared to Table[I] MNIST\ndoes a poor job on transferring of SVHN due to the overfitting of SVHN over MNIST network. In\ncontrast, SVHN perform quite well on transferring MNIST.\nBefore moving forward to discuss the experiments, we report the baseline train-test errors for the\ndatasets in Table{l] These results are produced by the deep learning library provided by the Oxford\n\nVisual Geometry Group|Vedaldi & Fulkerson] (2008).\nTable 1: Baseline Performances of Deep Learning\nTrain Error (%) Test Error (%)\n\nMNIST 0.04 0.55\nSVHN 0.13 3.81\nCIFAR-10 0.01 19.40\n\nCIFAR-100 0.17 50.90\nTable 2: Performance of Conventional and Distributed Transfer Learning for Experiment\nOn the other hand, transferring of SVHN from MNIST does not overfit when our distributed transfer\nlearning is employed. In both settings of original-target domains, our distributed strategy outper-\nforms the conventional transfer learning approach.\nThe experiment on CIFAR pair exposes more interesting results due to the fact that both datasets\nhave the same number of samples but completely different distributions among the classes. In prac-\ntice, CIFAR-100 includes all the classes of CIFAR-10 but CIFAR-10 does not have any clue of the\nseveral classes of CIFAR-100. The conventional experiments show that CIFAR-10 transfers well on\nCIFAR-100 but it cannot perform transferring although the target network does not overfit.\nAll in all, the performance of our distributed transfer learning (bold values) is better than the con-\nventional scheme and also, outperforms the baseline deep learning practices."}, {"section_index": "7", "section_name": "3.2 EXPERIMENT 2", "section_text": "For the first setup, CIFAR-10 does a better transfer learning than MNSIT although the number of\nclasses are the same. It seems that CIFAR-10 provides better generalization due to higher diversity\namong its classes. 
Here, our distributed algorithm performs better than the conventional process, and targeting MNIST on the CIFAR-10 network gives performance close to the deep learning outcomes. The second setup leads to the overfitting of SVHN over the CIFAR-100 network due to the huge number of samples. The other outcome is the poor performance of transferring CIFAR-100 over the SVHN network, as a result of the huge conceptual gap between the original and target domains.
Table 3: Performance of Conventional and Distributed Transfer Learning for Experiment 2 (train error % - test error %)
Conventional | Target: MNIST | Target: CIFAR-10
Original: MNIST | - | 0.43 - 28.92
Original: CIFAR-10 | 0.44 - 2.37 | -
Distributed | Target: MNIST | Target: CIFAR-10
Original: MNIST | - | 0.25 - 20.85
Original: CIFAR-10 | 0.23 - 0.95 | -
Conventional | Target: SVHN | Target: CIFAR-100
Original: SVHN | - | 0.71 - 89.31
Original: CIFAR-100 | 0.01 - 12.18 | -
Distributed | Target: SVHN | Target: CIFAR-100
Original: SVHN | - | 0.46 - 61.10
Original: CIFAR-100 | 0.28 - 7.25 | -
Our observations show that fine-tuning on the training set and calculating BPA on the validation set result in better generalization of the transferred model on the testing set. On the other hand, computing BPA on the training plus validation sets gives higher performance in case of hugely different numbers of classes in the original-target datasets. Since we employ BPA to address the class-imbalance problem, we reckon that it better captures the distribution of data by adjoining both train/validation sets, especially when we intend to transfer a few classes of the original dataset to the larger number of classes in the target."}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "We introduce a novel transfer learning for deep convolutional networks that tackles the optimization complexity of a highly non-convex objective by breaking it into several distributed fine-tuning operations. This also resolves the imbalanced class coverage between original-target domains by using basic probability assignment across several weak single-filter classifiers. By the above boosting, the overall performance shows considerable improvement over the conventional transfer learning scheme. We conduct several experiments on publicly available datasets and report the performance as train-test errors. The results confirm the advantage of our distributed strategy for the transfer learning."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Mingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks.
arXiv preprint arXiv:1605.06636, 2016.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
Kari Sentz and Scott Ferson. Combination of Evidence in Dempster-Shafer Theory, volume 4015. Citeseer, 2002.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.
Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In ICML (3), pp. 819-827, 2013.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Claude Sammut and Geoffrey I. Webb. Encyclopedia of Machine Learning. Springer Science & Business Media, 2011.
Xuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. In Advances in Neural Information Processing Systems, pp. 1898-1906, 2014."}]
Hkg8bDqee
[{"section_index": "0", "section_name": "INTROSPECTION: ACCELERATING NEURAL NETWORK\nTRAINING BY LEARNING WEIGHT EVOLUTION", "section_text": "Abhishek Sinha\u2019\nDepartment of Electronics and Electrical Comm. Engg.\nIIT Kharagpur\nWect Renoal India\nahitagnimukherjeeam at gmail dot com\nNeural Networks are function approximators that have achieved state-of-the-ar\naccuracy in numerous machine learning tasks. In spite of their great succes:\nin terms of accuracy, their large training time makes it difficult to use them fot\nvarious tasks. In this paper, we explore the idea of learning weight evolutior\npattern from a simple network for accelerating training of novel neural networks.\nWe use a neural network to learn the training pattern from MNIST classifi-\ncation and utilize it to accelerate training of neural networks used for CIFAR-10\nand ImageNet classification. Our method has a low memory footprint and is\ncomputationally efficient. This method can also be used with other optimizers\nto give faster convergence. The results indicate a general trend in the weight\nevolution during training of neural networks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have been very successful in modeling high-level abstractions in data. How\never, training a deep neural network for any AI task is a time-consuming process. This is because\nlarge number of parameters need to be learnt using training examples. Most of the deeper network:\ncan take days to get trained even on GPU thus making it a major bottleneck in the large-scale appli\ncation of deep networks. Reduction of training time through an efficient optimizer is essential fo\nfast design and testing of deep neural nets.\nIn the context of neural networks, an optimization algorithm iteratively updates the parameter:\n(weights) of a network based on a batch of training examples, to minimize an objective function\nThe most widely used optimization algorithm is Stochastic Gradient Descent. Even with the adven\nof newer and faster optimization algorithms like Adagrad, Adadelta, RMSProp and Adam there is\nstill a need for achieving faster convergence.\nIn this work we apply neural network to predict weights of other in-training neural networks to\naccelerate their convergence. Our method has a very low memory footprint and is computationally\nefficient. Another aspect of this method is that we can update the weights of all the layers in parallel.\n\u201cThis work was done as part of an internship at Adobe Systems, Noida\nMausoom Sarkar\nAdobe Systems Inc, Noida\nUttar Pradesh, India\nkbalaji at adobe dot com"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Several extensions of Stochastic Gradient Descent have been proposed for faster training of neur:\n\nnetworks. Some of them are Momentum (Rumelhart et al-]/1986), AdaGrad (Duchy et al]\nAdaDelta (Zeiler| 2012), RMSProp (Hinton et al J[2012) and Adam (Kingma & Baj 2014). AIT\nthem reduce the convergence time by suitably altering the learning rate during training. Our metho\ncan be used along with any of the above-mentioned methods to further improve convergence time.\nIn the above approaches, the weight update is always a product of the gradient and the modi\nfied/unmodified learning rate. More recent approaches (Andrychowicz et al. 2016) have tried t\nlearn the function that takes as input the gradient and outputs the appropriate weight update. 
This exhibited a faster convergence compared to a simpler multiplication operation between the learning rate and the gradient. Our approach is different, because our forecasting network does not use the current gradient for the weight update, but rather uses the weight history to predict its future value many time steps ahead, where the network would exhibit better convergence. Our approach generalizes better between different architectures and datasets without additional retraining. Further, our approach has a far smaller memory footprint as compared to that of Andrychowicz et al. (2016). Also, our approach need not be involved at every weight update and hence can be invoked only occasionally, which makes it computationally efficient.
Another recent approach, called Q-gradient descent, uses a reinforcement learning framework to tune the hyperparameters of the optimization algorithm as the training progresses. The Deep-Q Network used for tuning the hyperparameters itself needs to be trained with data from any specific network N to be able to optimize the training of N. Our approach is different because we use a pre-trained forecasting network that can optimize any network N without training itself on data from N.
Finally, the recent approach by Jaderberg et al. (2016) to predict synthetic gradients is similar to our work in the sense that the weights are updated independently, but it still relies on an estimation of the gradient, while our update method does not.
Our method is distinct from all the above approaches because it uses information obtained from the training process of existing neural nets to accelerate the training of novel neural nets.
The evolution of the weights of neural networks being trained on different classification tasks (on the MNIST and CIFAR-10 datasets), over different network architectures (weights from different layers of fully connected as well as convolutional architectures), and under different optimization rules was analyzed. It was observed that the evolution followed a general trend independent of the task the model was performing or the layer to which the parameters belonged. A major proportion of the weights did not undergo any significant change. Two metrics were used to quantify weight changes (a short sketch of computing both follows the list):
- Difference between the final and initial values of a weight scalar: this is a measure of how much a weight scalar has deviated from its initial value after training. In figure 4 we show the frequency histogram plot of the weight changes in a convolutional network trained for the MNIST image classification task, which indicates that most of the weight values do not undergo a significant change in magnitude. Similar plots for a fully connected network trained on MNIST (figure 6) and a convolutional network trained on CIFAR-10 (figure 8) present similar observations.
- Square root of the 2nd moment of the values a weight scalar takes during training: through this measure we wish to quantify the oscillation of weight values. This moment is taken about the initial value of the weight. In figure 5 we show the frequency histogram plot of the second moment of weight changes in a convolutional network trained for the MNIST digit classification task, which indicates that most of the weight values do not undergo significant oscillation in value during training. Similar plots for a fully connected network trained on MNIST (figure 7) and a convolutional network trained on CIFAR-10 (figure 9) present similar observations.
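Both metrics are straightforward to compute from a recorded weight history. A minimal NumPy sketch (the array shape is an assumption):

```python
import numpy as np

def weight_change_metrics(history):
    # history: assumed array of shape (T, W) -- the value of each of W weight
    # scalars recorded at T training steps, with history[0] the initialization.
    init = history[0]
    # Metric 1: deviation of the final value from the initialized value.
    final_deviation = history[-1] - init
    # Metric 2: square root of the 2nd moment of the weight values,
    # taken about the initial value, over the whole training history.
    second_moment_sqrt = np.sqrt(np.mean((history - init) ** 2, axis=0))
    return final_deviation, second_moment_sqrt

# Log-frequency histograms as in figures 4-9 can then be built with, e.g.,
# counts, bins = np.histogram(final_deviation, bins=100)
```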
A very small subset of all the weights undergoes massive changes compared to the rest. The few that did change significantly were observed to follow a predictable trend, where they would keep on increasing or decreasing with the progress of training in a predictable fashion. In figures 1, 2 and 3 we show the evolution history of a few weights randomly sampled from the weight-change histogram bins of figures 4, 6 and 8 respectively, which illustrates our observation.
Figure 1: Deviation of weight values from initialized values as a convolutional network gets trained on the MNIST dataset using the SGD optimizer.
Figure 2: Deviation of weight values from initialized values as a fully-connected network gets trained on the MNIST dataset using the Adam optimizer.
Figure 3: Deviation of weight values from initialized values as a CNN gets trained on the CIFAR-10 dataset using the SGD optimizer.
Figure 4: log-frequency distribution of the difference between weight values before and after training for a network N0 trained on the MNIST dataset using the SGD optimizer.
Figure 5: log-frequency distribution of the square root of the 2nd moment of a weight value (about its initial value) along its training history, for a network N0 trained on the MNIST dataset using the SGD optimizer.
Figure 6: log-frequency distribution of the difference between weight values before and after training for a fully-connected network trained on the MNIST dataset using the Adam optimizer.
Figure 7: log-frequency distribution of the square root of the 2nd moment of a weight value (about its initial value) along its training history, for a fully-connected network trained on the MNIST dataset using the Adam optimizer.
Figure 8: log-frequency distribution of the difference between weight values before and after training for a CNN trained on the CIFAR-10 dataset using the SGD optimizer.
Figure 9: log-frequency distribution of the square root of the 2nd moment of a weight value (about its initial value) along its training history, for a CNN trained on the CIFAR-10 dataset using the SGD optimizer."}, {"section_index": "3", "section_name": "3.1 WEIGHT PREDICTION", "section_text": "We collect the weight evolution trends of a network that is being trained and use the collected data to train a neural network I to forecast the future values of each weight based on its values in the previous time steps. The trained network I is then used to predict the weight values of an unseen network N during its training, moving N to a state that enables faster convergence. The time taken for the forecast is significantly smaller than the time a standard optimizer (e.g. SGD) would have taken to achieve the same accuracy, which leads to a reduction in the total training time. The predictor I used for forecasting weights is a comparatively small neural network, whose inference time is negligible compared to the training time of the network that needs to be trained (N). We call this predictor I the Introspection network, because it looks at the weight evolution during training.
The forecasting network I is a simple one-hidden-layer feedforward neural net. The input layer consists of four neurons that take four samples from the training history of a weight. The hidden layer consists of 40 neurons, fully connected to the input layer, with ReLU activation. The output layer is a single neuron that outputs the predicted future value of the weight. In our experiments, four was the minimum number of samples for which the training of the Introspection network I converged.
Figure 10 below shows a comparison of the weight evolution for a single scalar weight value with and without using the introspection network I. The vertical green bars indicate the points at which the introspection network was used to predict the future values. Post prediction, the network continues to be trained normally by SGD, until the introspection network I is used once again to jump to a new weight value."}, {"section_index": "5", "section_name": "4.1 TRAINING OF INTROSPECTION NETWORK", "section_text": "The introspection network I is trained on the training history of the weights of a network N0 which was trained on the MNIST dataset. The network N0 consisted of 3 convolutional layers and two fully connected layers, with ReLU activations, trained with the Adam optimizer. Max pooling (2x2 pool size and 2x2 stride) was applied after the conv layers, along with dropout applied after the first fc layer. The shapes of the conv layer filters were [5, 5, 1, 8], [5, 5, 8, 16] and [5, 5, 16, 32] respectively, and the fc layer weights were [512, 1024] and [1024, 10]. The network N0 was trained with a learning rate of 1e-4 and a batch size of 50. The training set of I is prepared as follows. A random training step t is selected for each weight of N0 chosen as a training sample, and the following 4 values are given as inputs for training I (a sketch of this construction, and of I itself, follows the list):
- the value of the weight at step t
- the value of the weight at step 7t/10
- the value of the weight at step 4t/10
- the value of the weight at step 0 (i.e. the initialized value)
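A minimal sketch of this input construction and of the one-hidden-layer forecasting network (in PyTorch; details such as weight initialization are assumptions):

```python
import numpy as np
import torch
import torch.nn as nn

class Introspection(nn.Module):
    """Four samples of one weight's history -> predicted future value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 40), nn.ReLU(), nn.Linear(40, 1))
    def forward(self, x):
        return self.net(x)

def make_example(history, t, k=2.0):
    # history: assumed 1-D array of one weight's value at every training step.
    # Inputs follow the order listed above: steps t, 7t/10, 4t/10 and 0.
    x = np.array([history[t],
                  history[7 * t // 10],
                  history[4 * t // 10],
                  history[0]])
    y = history[int(k * t)]   # target: the value at step 2t (k = 2)
    return (torch.tensor(x, dtype=torch.float32),
            torch.tensor([y], dtype=torch.float32))
```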
Since a large proportion of the weights remain nearly constant throughout the training, a preprocessing step is performed before assembling the training data for I. The large number of weight histories collected are sorted in decreasing order of their variation in value from time step 0 to time step t. We choose 50% of the training data from the top 50th percentile of the sorted weights, 25% from the next 25th percentile (between the 50th and 75th percentiles), and the remaining 25% from the rest (the 75th to 100th percentile). Approximately 0.8 million examples of weight history are used to train I. As the weight values are very small fractions, they are multiplied by 1000 before being input to the network I. The expected output of I, which is used for training I via backpropagation, is a single scalar: the value of the same weight at step 2t. This is an empirical choice; any step kt with k > 1 could be chosen instead of 2t. In our experiments with varying k, we found that k = 2.2 reached a slightly better validation accuracy than k = 2.0 on the MNIST dataset (see figure 15), but on the whole k = 2.0 was far more consistent in its out-performance at various points in training. All the results reported here are for the I trained to predict weight values at step 2t.
Figure 10: Example of a weight update using the Introspection network.
The Adam optimizer was used for training the introspection network, with a mini-batch size of 20. The training was carried out for 30k steps. The learning rate was 5e-4, decreased gradually after every 8k training steps. The L1 error was used as the loss function; we experimented with both the L2 error and the percentage error but found that the L1 error gave the best result on the validation set. The final training loss obtained was 3.1 and the validation loss of the final trained model was 3.4. These correspond to average L1 weight prediction errors of 0.0031 and 0.0034 on the training and validation sets respectively, since the weight values are multiplied by 1000 before they are input to I.
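Under those details, training I reduces to an ordinary regression with an L1 loss on x1000-scaled values. A sketch (the percentile-based sampling and the stepwise learning-rate decay are elided; tensor shapes are assumptions):

```python
import torch

def train_introspection(model, x, y, steps=30000, batch=20):
    # model: the Introspection net sketched earlier; x, y: stacked training
    # examples of shape (N, 4) and (N, 1) built with make_example.
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    l1 = torch.nn.L1Loss()
    for step in range(steps):
        idx = torch.randint(0, x.shape[0], (batch,))
        # Weight values are tiny fractions, so both inputs and targets are
        # scaled by 1000, as described above.
        loss = l1(model(1000.0 * x[idx]), 1000.0 * y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```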
The introspection network, once trained, can then be used to guide the training of other networks. We illustrate our method by using it to accelerate the training of several deep neural nets with varying architectures on 3 different datasets, namely MNIST, CIFAR-10 and ImageNet. We note that the same introspection network I, trained on the weight evolutions of the MNIST network N0, was used in all of these cases.
All the networks trained using I required comparatively less time to reach the same accuracy as normal SGD training. Also, when the same network was trained for the same time with and without updates by I, the former was observed to have better accuracy. These results show that there is a remarkable similarity in the weight evolution trajectories across network architectures, tasks and datasets.
Four different neural networks were trained using I on the MNIST dataset; they are described in the list below. All the networks were trained using either Stochastic Gradient Descent or Adam, and the network I was used at a few intermediate steps to propel the network to a state with higher accuracy. We refer to a time step at which the introspection network I is applied to update all the weights as a "jump point".
The selection of the steps at which I is to be used depends on the distribution of the training step t used for training I. We show the effect of varying the timing of the initial jump and the time interval between jump points in section 4.2.2. It has been observed that I gives a better increase in accuracy when it is used at later training steps rather than at earlier ones. (A sketch of the jump operation itself is given below.)
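At a jump point, every weight's recorded history is pushed through the trained I in one batch and the network's parameters are overwritten with the rescaled predictions. A hedged sketch (the history bookkeeping interface is an assumption):

```python
import torch

@torch.no_grad()
def jump(net, introspection, histories, t):
    # histories: assumed dict mapping each parameter of `net` to a tensor of
    # shape (t+1, numel) holding that parameter's value at every step so far.
    for p in net.parameters():
        h = histories[p]
        x = torch.stack([h[t], h[7 * t // 10], h[4 * t // 10], h[0]], dim=1)
        pred = introspection(1000.0 * x) / 1000.0   # undo the input scaling
        p.copy_(pred.view_as(p))     # overwrite; normal SGD training resumes
```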
1. A convolutional neural network MNIST1 with 2 convolutional layers and 2 fully connected layers (a dropout layer after the 1st fc layer is also present), with ReLU activations, for the classification task on the MNIST image dataset. Max pooling (2x2 pool size and 2x2 stride) was applied after every conv layer. The CNN layer weights were of shape [5, 5, 1, 8] and [5, 5, 32, 64] respectively, and the fc layers were of sizes [3136, 1024] and [1024, 10]. The weights were initialised from a truncated normal distribution with a mean of 0 and std of 0.01. The network was trained using SGD with a learning rate of 1e-2 and a batch size of 50. It takes approximately 20,000 steps for convergence with the SGD optimiser. For MNIST1, I was used to update all weights at training steps 3000, 4000 and 5000.
2. A convolutional network MNIST2 with 2 convolutional layers and 2 fully connected layers, with ReLU activations. Max pooling (2x2 pool size and 2x2 stride) was applied after every conv layer. The two fc layers were of sizes [800, 500] and [500, 10], and the two conv layers were of shape [5, 5, 1, 20] and [5, 5, 20, 50] respectively. The weights were initialised via Xavier initialisation. The initial learning rate was 0.01, decayed via the inv policy with gamma and power being 1e-4 and 0.75 respectively. A batch size of 64 was used for training. It takes approximately 10,000 steps for convergence. The network I was used to update weights at training steps 2500 and 3000.
3. A fully connected network MNIST3 with 2 hidden layers, each consisting of 256 hidden units with ReLU activations. The network was trained using SGD with a learning rate of 5e-3 and a batch size of 100. The initial weights were drawn from a normal distribution with mean 0 and std 1.0. For this network the weight updates were carried out at steps 6000, 8000 and 10000.
4. An RNN MNIST4 used to classify MNIST, having an LSTM cell of hidden size 128 followed by an fc layer of shape [128, 10] for classification. The RNN was trained with the Adam optimizer, a learning rate of 5e-4 and a batch size of 128. The weight updates for this network were done at steps 2000, 3000 and 4000. Since the LSTM cell uses sigmoid and tanh activations, MNIST4 allows us to explore whether the introspection network, trained on ReLU networks, generalizes to networks using different activation functions.
A comparison of the validation accuracy with and without updates by I is shown in figures 11, 12, 13 and 14. The green lines indicate the steps at which the introspection network I is used. For the MNIST1 network, with the application of the introspection network at three points, we found that it took 351 seconds and 20000 SGD steps to reach a validation accuracy of 98.22%. In the same number of SGD steps, normal training was able to reach a validation accuracy of only 97.22%; in the same amount of time (251 seconds), normal training only reached 97.92%. Hence the gain in accuracy with the application of the introspection network translates to real gains in training time.
For the MNIST2 network, figure 12 shows that to reach an accuracy of 99.11%, the number of iterations required by normal SGD was 6000, whereas with the application of the introspection network I, the number of iterations needed was only 3500, which represents a significant saving in time and computational effort.
Figure 11: Validation accuracy plot for MNIST1.
Figure 12: Validation accuracy plot for MNIST2.
Figure 13: Validation accuracy plot for MNIST3.
Figure 14: Validation accuracy plot for MNIST4, which is an RNN.
The initial drop in accuracy seen after a jump in MNIST2 (figure 12) can be attributed to the fact that each weight scalar is predicted independently, so the interrelationships between the weight scalars in a layer or across different layers are not taken into consideration. These interrelationships are soon re-established after a few SGD steps. This phenomenon is noticed in the CIFAR and ImageNet cases too.
For MNIST3, after 15000 steps of training, the max accuracy achieved by normal training via the Adam optimizer was 95.71%, whereas with the introspection network applied the max accuracy was 96.89%. To reach the max accuracy reached by normal training, the modified network (weights updated by I) took only 8300 steps.
For MNIST4, after 7000 steps of training, the max accuracy achieved by normal training was 98.65%, reached after 6500 steps, whereas after modification by I it was 98.85%, reached after 5300 steps. The modified network reached the max accuracy achieved by the normal network after only 4200 steps. It is notable that the introspection network I, trained on weight evolutions with ReLU activations, was able to help accelerate the convergence of an RNN that uses sigmoid and tanh activations.
Figure 15: Comparison of introspection networks trained with different jump ratios on the MNIST1 network with the Adam optimizer. A jump ratio of 2.0 gives a more consistent out-performance than a jump ratio of 2.2, even though the latter reaches a slightly higher accuracy.
We applied our introspection network I to a CNN CIFAR1 for classifying images in the CIFAR-10 (Krizhevsky, 2009) dataset. It has 2 convolutional layers, 2 fully connected layers and a final softmax layer, with ReLU activation functions. Max pooling (3x3 pool size and 2x2 stride) and batch normalization are applied after each convolutional layer. The two conv layer filter weights are of shape [5, 5, 3, 64] and [5, 5, 64, 64] respectively, and the two fc layers and the final softmax layer are of shape [2304, 384], [384, 192] and [192, 10] respectively.
The weights were initialized from a zero-mean normal distribution with std of 1e-4 for the conv layers, 0.04 for the two fc layers and 1/192.0 for the final layer. The initial learning rate used is 0.1, decayed by a factor of 0.1 after every 350 epochs. A batch size of 128 was used, and the model was trained via the SGD optimizer; it takes approximately 40,000 steps for convergence. The experiments on CIFAR1 were done to investigate two issues. The first was whether the introspection network trained on MNIST weight evolutions is able to generalize to a different network and a different dataset. The second was the effect of varying the timing of the initial jump, the interval between successive jumps and the number of jumps. To investigate these issues, four separate training instances were performed with 4 different sets of jump points:
- Set1: weight updates at training steps 12000 and 17000.
- Set2: weight updates at steps 15000 and 18000.
- Set3: weight updates at steps 12000, 15000 and 19000.
- Set4: weight updates at steps 14000, 17000 and 20000.
We observed that for the CIFAR1 network, in order to reach a validation accuracy of 85.7%, we need 40,000 iterations with normal SGD without any intervention by the introspection network I. In all four sets where the introspection network was used, the target accuracy of 85.7% was reached in approximately 28,000 steps. This shows that the introspection network is able to successfully generalize to a new dataset and new architecture and show significant gains in training time.
On CIFAR1, the time taken by I for prediction is negligible compared to the time required for SGD, so the training times in the above cases can be assumed to be proportional to the number of SGD steps required.
A comparison of the validation accuracy with and without updates by I at the four different sets of jump points is shown in figures 16, 17, 18 and 19. The results show that while the choice of jump points has some effect on the final result, the effects are not very large. In general, we notice that better accuracy is reached when the jumps take place at later training steps.
Figure 16: Validation accuracy plot for CIFAR1 with jumps at Set1.
Figure 17: Validation accuracy plot for CIFAR1 with jumps at Set2.
Figure 18: Validation accuracy plot for CIFAR1 with jumps at Set3.
Figure 19: Validation accuracy plot for CIFAR1 with jumps at Set4.
To investigate the practical feasibility and generalization ability of our introspection network, we applied it to training AlexNet (Krizhevsky et al., 2012) (AlexNet1) on the ImageNet (Russakovsky et al., 2015) dataset. It has 5 conv layers and 3 fully connected layers. Max pooling and local response normalization are used after the two starting conv layers, and the pooling layer is there after the fifth conv layer as well. We use SGD with momentum of 0.9 to train this network, starting from a learning rate of 0.01. The learning rate was decreased by one tenth every 100,000 iterations. The mini-batch size was 128. It takes approximately 300,000 steps for convergence. The weight updates were carried out at training steps 120,000, 130,000, 144,000 and 160,000.
We find that in order to achieve a top-5 accuracy of 72%, the number of iterations required in the normal case was 196,000.
When the introspection network was used, the number of iterations required to reach the same accuracy was 179,000. Again, the time taken by I for prediction is negligible compared to the time required for SGD. A comparison of the validation accuracy with and without updates by I is shown in figure 20. The green lines indicate the steps at which the introspection network I is used. The corresponding plot of the loss function against training steps is shown in figure 21.
Figure 20: Validation accuracy plot for AlexNet1 on ImageNet.
Figure 21: Plot of loss function vs training steps for AlexNet1 on ImageNet.
The results on AlexNet1 show that our approach has a small enough memory footprint and is computationally efficient enough to scale to training practical large-scale networks.
In this section we provide a comparison with other optimizers, and with simple heuristics that can be used to update the weights at different training steps instead of updates by the introspection network."}, {"section_index": "6", "section_name": "4.4 COMPARISON WITH ADAM OPTIMIZER", "section_text": "We applied the introspection network on the MNIST1 and MNIST3 networks trained with the Adam optimizer with learning rates of 1e-4 and 1e-3. The results in figures 22 and 23 show that while Adam outperforms normal SGD and SGD with introspection, we were able to successfully apply the introspection network to the Adam optimizer and accelerate it.
For MNIST1, the max accuracy achieved by Adam with introspection was 99.34%, by normal Adam 99.3%, by SGD with introspection 99.21% and by normal SGD 99.08%. With introspection applied on Adam, the model reached the max accuracy achieved by normal Adam after only 7200 steps, whereas normal training required 10000 steps.
For MNIST3, the max accuracy achieved by Adam with introspection was 96.9%, by normal Adam 95.7%, by SGD with introspection 94.47% and by normal SGD 93.39%. With introspection applied on Adam, the model reached the max accuracy achieved by normal Adam after only 8800 steps, whereas normal training required 15000 steps.
Figure 22: Test accuracy comparison for MNIST1 for the SGD and Adam optimisers, in the presence and absence of introspection.
A separate quadratic curve was fit to each of the weight values of the model on the basis of the 4 past weight values chosen from its history. The weight values chosen from the history were at the same steps as those used for updates by I. The new updated weight is the value of the quadratic curve at some future time step. For MNIST1, experiments were performed by updating the weights to the value predicted by the quadratic function at a future time step equal to 1.25, 1.3 or 1.4 times the current time step. For higher jump ratios the updates would cause the model to diverge, and lower jump ratios did not show much improvement in performance. The plot comparing the validation accuracy is shown in figure 24.
Figure 24: Comparison of test accuracy for MNIST1 with weight updates by Introspection and by quadratic fit.
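This baseline is easy to reproduce with a polynomial fit. A NumPy sketch, using the same four sample steps as the introspection inputs:

```python
import numpy as np

def quadratic_jump(history, t, ratio=1.25):
    # Fit a quadratic to the weight's value at the four history steps used
    # by I, then extrapolate to the future step `ratio * t`.
    steps = np.array([0, 4 * t // 10, 7 * t // 10, t], dtype=float)
    values = history[steps.astype(int)]
    coeffs = np.polyfit(steps, values, deg=2)   # least-squares quadratic
    return np.polyval(coeffs, ratio * t)        # predicted future value
```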
The max accuracy achieved with introspection applied was 99.21%, whereas with the quadratic fit it was 99.19%. We note that even though the best-performing quadratic fit eventually almost reaches the same max accuracy as that achieved with the introspection network, it required considerable experimentation to find the right jump ratio. A unique observation for the quadratic-fit baseline was that it could take the accuracy down dramatically, to as low as 9.8%, from which training often never recovers; sometimes the optimizers (SGD or Adam) would recover the accuracy, as seen in figure 24. Moreover, the quadratic-fit baseline was not able to generalize to other datasets and tasks: the best-performing jump ratio of 1.25 was not able to outperform introspection on the CIFAR-10 dataset, as seen in figure 25.
For normal training via SGD without any updates, over 30000 steps of training, the max accuracy of 85.29 was achieved after 26500 steps, whereas the same accuracy was achieved by introspection after only 21200 steps, and after 27000 steps via updates by the quadratic fit.
Figure 23: Test accuracy comparison for MNIST3 for the SGD and Adam optimisers, in the presence and absence of introspection.
Figure 25: Comparison of test accuracy for CIFAR-10 with weight updates by Introspection and by quadratic fit.
In the CIFAR-10 case, the maximum accuracy achieved via updates by introspection was 85.6, reached after 25500 steps, whereas with updates by the quadratic fit the max accuracy of 85.45 was reached after 27200 steps.
Instead of fitting a quadratic curve to each of the weights, we also tried fitting a linear curve. Experiments were performed on MNIST1 for jump ratios of 1.1 and 1.075, as higher ratios would cause the model to diverge after 2 or 3 jumps. The result is shown in figure 26.
Figure 26: Comparison of test accuracy for MNIST1 with weight updates by Introspection and by linear fit.
As no significant improvement in performance was observed, the experiment was not repeated on CIFAR."}, {"section_index": "7", "section_name": "4.5 LINEAR INTROSPECTION NETWORK", "section_text": "We removed the ReLU nonlinearity from the introspection network and used the same training procedure as for the normal introspection network to predict the future values at 2t. We then used this linear network on the MNIST1 network. We found that it gave some advantage over normal SGD, but was not as good as the introspection network, as shown in figure 27. Hence we did not explore this baseline for other datasets and networks.
Figure 27: Validation accuracy plot for MNIST1 using an introspection network without nonlinearity."}, {"section_index": "8", "section_name": "4.5.1 ADDING NOISE", "section_text": "The weight values were updated by adding small zero-mean Gaussian random noise. The experiment was performed on MNIST3 for two different std values, the results of which are shown in figure 28.
Figure 28: Test accuracy for MNIST3 with weight updates via Gaussian noise.
Since no significant improvement was observed for the weight updates via noise on MNIST, the experiment was not performed on CIFAR-10.
Some of the open questions to be investigated relate to the determination of the optimal jump points, and investigations regarding the generalization capacity of the introspection network to speed up training
in RNNs and non-image tasks. Also, we noticed that applying the jumps at very early training steps while training AlexNet1 tended to degrade the final outcomes. This may be due to the fact that our introspection network is extremely simple and has been trained only on weight evolution data from MNIST. A combination of a more powerful network and training data derived from a diverse set may ameliorate this problem.
We introduced a method to accelerate neural network training. For this purpose, we used a neural network I that learns a general trend in the weight evolution of neural networks. After learning the trend from one neural network's training, I is used to update the weights of many deep neural nets on 3 different tasks, MNIST, CIFAR-10 and ImageNet, with varying network architectures, activations, optimizers and normalizing strategies (batch norm, LRN). Using the introspection network I led to faster convergence compared to existing methods in all the cases. Our method has a small memory footprint, is computationally efficient and is usable in practical settings. Our method differs from other existing methods in that it utilizes the knowledge obtained from the weight evolution of one neural network's training to accelerate the training of several unseen networks on new tasks. The results reported here indicate the existence of a general underlying pattern in the weight evolution of any neural network.
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009."}, {"section_index": "9", "section_name": "A APPENDIX", "section_text": "In this section, we report some initial results of applying the introspection network I (trained on the weight evolution of the MNIST network N0) to accelerate the training of the Inception-v1 network (Szegedy et al., 2014). We trained the Inception-v1 network on the ImageNet dataset with a mini-batch size of 128 and an RMS optimizer (decay 0.9, momentum 0.9, epsilon 1.0), starting from a learning rate of 0.01 with a decay of 0.94 after every 2 epochs. The network training is still in progress, and we will eventually report on the final outcome; however, we thought it would be valuable to share the preliminary results all the same.
We found that applying the introspection network seems to be reducing the training time quite significantly. In Figures 29 and 30 we see that applying the introspection network leads to a gain of at least 730,000 steps. After training for around 1.5 million steps, the maximum accuracy achieved by normal training was 68.40%, whereas with introspection applied after every 300k steps the max accuracy achieved was 69.06%; the network achieved the max accuracy of 68.40% after only 852k steps. With introspection applied at steps 200k, 400k and 600k, the max accuracy achieved was 68.69%, and it reached the max accuracy achieved by the normal training of the model after only 944k steps.
Figure 29: Test accuracy plot for the Inception-v1 network with weight updates via the introspection network at steps 2x10^5, 4x10^5 and 6x10^5 (pink curve) and at steps 3x10^5, 6x10^5 and 9x10^5 (blue curve).
However, we also observed that choosing the jump points early in the training does not lead to eventual gains, even though a significant jump in accuracy is observed initially. Figure 31 shows the flattening of the test accuracy after a set of early jumps.
It remains to be seen if further interventions later in the training can help maintain the initial accelerated convergence.
Figure 30: Test accuracy plot for the Inception-v1 network with weight updates via the introspection network at steps 3x10^5, 6x10^5 and 9x10^5.
Figure 31: Test accuracy plots for the Inception-v1 network with weight updates via the introspection network at early training steps (curves: with introspection network, jump step 300k; with introspection network, early jumps; without introspection network)."}]
HyWDCXjgx
[{"section_index": "0", "section_name": "MULTI-LABEL LEARNING WITH THE RNNSX\nFOR FASHION SEARCH", "section_text": "Se-Yeoung Kim, Sang-II Na, Ha-Yoon Kim, Moon-Ki Kim, Byoung-Ki Jeon\nseyeong, sang.il.na,hayoon,moonki, standard}@sk.com\n{taey.16@navercorp.com}\nWe build a large-scale visual search system which finds similar product image:\ngiven a fashion item. Defining similarity among arbitrary fashion-products i:\nstill remains a challenging problem, even there is no exact ground-truth. To re-\nsolve this problem, we define more than 90 fashion-related attributes, and com.\nbination of these attributes can represent thousands of unique fashion-styles. We\nthen introduce to use the recurrent neural networks (RNNs) recognising multiple\nfashion-attributes with the end-to-end manner. To build our system at scale, thes\u00ab\nfashion-attributes are again used to build an inverted indexing scheme. In additior\nto these fashion-attributes for semantic similarity, we extract colour and appear.\nance features in a region-of-interest (ROI) of a fashion item for visual similarity\nBy sharing our approach, we expect active discussion on that how to apply curren\ndeep learning researches into the e-commerce industry."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "These computer vision researches mainly concern about general object recognition. However, in\nour fashion-product search domain, we need to build a very specialised model which can mimic\nhuman's perception of fashion-product similarity. To this end, we start by brainstorming about\nwhat makes two fashion items are similar or dissimilar. Fashion-specialist and merchandisers are\nalso involved. We then compose fashion-attribute dataset for our fashion-product images. Table\n[I]explains a part of our fashion-attributes. Conventionally, each of the columns in Table{t]can be\nmodelled as a multi-class classification. Therefore, our fashion-attributes naturally is modelled as a\nmulti-label classification.\n\u2018This work was done by the author at SK Planet."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Online commerce has been a great impact on our life over the past decade. We focus on an online\nmarket for fashion related item\u00a2| Finding similar fashion-product images for a given image query\nis a classical problem in an application to computer vision, however, still challenging due to the\nabsence of an absolute definition of the similarity between arbitrary fashion items.\nDeep learning technology has given great success in computer vision tasks such as efficient feature\n\net al] 2016b), detection (Ren et al, [Zhang et al.||2016), and segmentation (\nFurthermore, image to caption generation (Vinyals et al.]/2015| [2015]\ntion answering (VQA) are emerging research fields combining vision, language\n(Mikolov et al.| 2010), sequence to sequence (Sutskever et al.| {2014), long-term memory\n(2016) based modelling technologies.\nTable 1: An example of fashion-attributes.\nMulti-label classification has a long history in the machine learning field. To address this problem, a\nstraightforward idea is to split such multi-labels into a set of multi-class classification problems. In\nour fashion-attributes, there are more than 90 attributes. Consequently, we need to build more than\n90 classifiers for each attribute. 
It is worth noting that, for example, the collar attribute can represent upper-garments, but it is absent for bottom-garments such as skirts or pants, which means some attributes are conditioned on other attributes. This is the reason that learning the tree structure of the attribute dependencies can be more efficient (Zhang & Zhang, 2010; Fu et al., 2012; Gibaja & Ventura, 2015).
Recently, recurrent neural networks (RNNs) have become very commonly used in automatic speech recognition (ASR) (Graves et al., 2014), language modelling (Mikolov et al., 2010), word dependency parsing (Mirowski & Vlachos, 2015), machine translation (Cho et al., 2014), and dialog modelling (Henderson et al., 2014). To preserve long-term dependencies in the hidden context, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and its variants (Zaremba et al., 2014; Cooijmans et al., 2016) were breakthroughs in such fields. We use the LSTM to learn the fashion-attribute dependency structure implicitly. By using the LSTM, our attribute recognition problem is regarded as a sequence classification. There is a similar work in Wang et al. (2016); however, we do not use the VGG16 network (Simonyan & Zisserman, 2014) as an image encoder, but use our own encoder. To the best of our knowledge, this is the first work applying an LSTM to a multi-label classification task in the commercial fashion-product search domain.
We started building our large-scale fashion-attribute dataset last year. It took a maximum of 100 man-months and almost one year to complete. There are 19 fashion-categories and more than 90 attributes for representing a specific fashion-style. For example, top garments include T-shirts, blouses, bags etc. The T-shirt category has collar, sleeve-length, gender, etc. The gender attribute has binary classes (i.e. female and male). The sleeve-length attribute has multiple classes (i.e. long, a half, sleeveless etc.). Theoretically, the combination of our attributes can represent thousands of unique fashion-styles. A part of our attributes is shown in Table 1. ROIs for each fashion item in an image are also included in this dataset. In total, we collected 1 million images. This internal dataset is used for training our fashion-attribute recognition model and our fashion-product ROI detector respectively.

Great-category   Fashion-category   Gender        Silhouette     Collar         Sleeve-length
(3 classes)      (19 classes)       (2 classes)   (14 classes)   (18 classes)   (6 classes)
bottom           T-shirts           male          normal         shirt          long
top              pants              female        A-line         turtle         a half
...              bags               ...           ...            round          sleeveless

The remainder of this paper is organized as follows. In Sec. 2 we describe the details of our fashion-attribute dataset. Sec. 3 describes the proposed fashion-product search system in detail. Sec. 4 explains empirical results for given image queries. Finally, we draw our conclusion in Sec. 5."}, {"section_index": "3", "section_name": "3 FASHION-PRODUCT SEARCH SYSTEM", "section_text": "In this section, we describe the details of our system. The whole pipeline is illustrated in Fig. 3. As in a conventional information retrieval system, our system has an offline and an online phase. In the offline process, we take both an image and its textual meta-information as the inputs. The reason we take additional textual meta-information is that, for example, in Fig. 1a the dominant fashion item in the image is a white dress, yet our merchandiser enrolled it to sell the brown cardigan as described
Finally, we draw our conclusion in Sec.\nTextual meta-information: Textual meta-informatior\nwomen\u2019s clothes/ brend-new/\n\ncardigan and knit/ women\u2019s shirts, blouse/\nround-neck cardigan see-through blouse\nFigure 1: Examples of image and its textual meta-information\nin its meta-information. In Fig.|1p, there is no way of finding which fashion item is to be sold with-\nout referring the textual meta-information seller typed manually. Therefore, knowing intension (i.e.\nwhat to sell) for our merchandisers is very important in practice. To catch up with these intension,\nwe extract fashion-category information from the textual meta. The extracted fashion-category in-\nformation is fed to the fashion-attribute recognition model. The fashion-attribute recognition model\npredicts a set of fashion-attributes for the given image. (see Fig. These fashion-attributes are\nused as keys in the inverted indexing scheme. On the next stage, our fashion-product ROI detector\nfinds where the fashion-category item is in the image. (see Fig. |8) We extract colour and appear-\nance features for the detected ROI. These visual features are stored in a postings list. In these\nprocesses, it is worth noting that, as shown in Fig. |8| our system can generate different results in\nthe fashion-attribute recognition and the ROI detection for the same image by guiding the fashion-\ncategory information. In online process, there is two options for processing a user-query. We can\nan fe\n\nshoes, #male, #leather, top, #coat, #female, #bottom, #pants, #female, 2 : ;\n#under-ankle, #low-heel, #long-sleeved, #monochrom, #long, #skiny-shilloutte, Piles hier sia arr\nfimonochrom, #ishoelace #tailored-collar, #ar-coat, #mormal-waist, #belt-type, pS)\n#normal-fit #double-button-type #botton-lock, #in-pocket, #fading\nB \u00e9\noites 4\nNEO \u2018\n+\nvd\nHop-bottom, #dress, #female, #bottom, #pants, #male, Hop, #suit-jacket, #male, #shoes, #female, #leather,\n4slim, #mini, #pencil, #long, #sweetpants, #Elastic-waist, __ #ailored-collar, #long-sleeved, #ankle-boot, #high-heel,\n\n#round-neck, #long-sleeved #in-pocket, #sibori, #brend-logo #moder-fit, #two-button #monochrom, #buckle\nFigure 2: Examples of recognized fashion-attributes for given images.\nake a guided information, what the user wants to find, or the fashion-attribute recognition mode\nutomatically finds what fashion-category item is the most likely to be queried. This is up to the\niser's choice. For the given image by the user, the fashion-attribute recognition model generate:\nashion-attributes, and the results are fed into the fashion-product ROI detector. We extract colou!\nind appearance features in the ROI resulting from the detector. We access to the inverted inde\u00bb\n\\ddressed by the generated a set of fashion-attributes, and then get a postings list for each fashion:\n\u2018tribute. We perform nearest-neighbor retrieval in the postings lists so that the search complexity i:\neduced drastically while preserving the semantic similarity. To reduce memory capacity and speec\nip this nearest-neighbor retrieval process once more, our features are binarized and CPU depen.\n|\nab\n\nSearch results\n\nee\n\nNearest-Neighbor\nsearch\n\nColour and appearence\n\nfeature extraction dnverted inde\n\nColour and appearence\nfeature extraction\n\nROI detection\n\nAttribute\n\nROI detection\n\nrecognition\nInform Attribute\nExtrac recognition\n\n2 $38 2: 2\nFigure 3: The whole pipeline of the proposed fashion-product search system. 
We build our own vision encoder network (ResCeption), which is based on the inception-v3 architecture (Szegedy et al., 2016b). To improve both the speed of convergence and generalization, we introduce a shortcut path (He et al., 2016a;b) for each data-flow stream (except streams containing at most one convolutional layer) in all inception-v3 modules. Denote the input of the l-th layer by x^l, the output of the l-th layer by x^{l+1}; the l-th layer is a function H : x^l -> x^{l+1}, and the loss function is \mathcal{L}(\theta; x^L). Then forward and backward propagation are derived such that

\mathbf{x}^{l+1} = H(\mathbf{x}^{l}) + \mathbf{x}^{l} \quad (1)

\frac{\partial \mathbf{x}^{l+1}}{\partial \mathbf{x}^{l}} = \frac{\partial H(\mathbf{x}^{l})}{\partial \mathbf{x}^{l}} + 1 \quad (2)

Imposing gradients from the loss function down to the l-th layer through Eq. (2),

\frac{\partial \mathcal{L}}{\partial \mathbf{x}^{l}} = \frac{\partial \mathcal{L}}{\partial \mathbf{x}^{l+1}} \frac{\partial \mathbf{x}^{l+1}}{\partial \mathbf{x}^{l}} = \cdots = \frac{\partial \mathcal{L}}{\partial \mathbf{x}^{L}} \prod_{i=l}^{L-1} \left( 1 + \frac{\partial H(\mathbf{x}^{i})}{\partial \mathbf{x}^{i}} \right) \quad (3)

As in Eq. (3), the error signal \partial\mathcal{L}/\partial\mathbf{x}^{L} goes down to the l-th layer directly through the shortcut path, and the gradient signals from the (L-1)-th layer down to the l-th layer are added consecutively (i.e. the \partial H(\mathbf{x}^{i})/\partial\mathbf{x}^{i} terms). Consequently, all the terms in Eq. (3) are aggregated by the additive operation instead of the multiplicative operation, except for the initial error from the loss (i.e. \partial\mathcal{L}/\partial\mathbf{x}^{L}). This prevents the vanishing or exploding gradient problem. Fig. 4 depicts the network architecture for the shortcut paths in an inception-v3 module.
Figure 4: Network architecture for the shortcut paths (depicted as two red lines) in an inception-v3 module.
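The shortcut idea of Eqs. (1)-(3) can be expressed as a generic wrapper around any inception-style branch. A hedged PyTorch sketch (the branch shown is a toy stand-in, not the actual ResCeption module):

```python
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Wraps a branch H so the block computes H(x) + shortcut(x), as in Eq. (1)."""
    def __init__(self, branch, in_ch, out_ch):
        super().__init__()
        self.branch = branch
        # Projection shortcut (1x1 conv) when dimensions differ, as the paper
        # does throughout inception-v3 modules; identity otherwise.
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.branch(x) + self.shortcut(x)

# Example: a toy 1x7 -> 7x1 stream, one of the inception-v3 branch shapes.
branch = nn.Sequential(nn.Conv2d(64, 96, (1, 7), padding=(0, 3)),
                       nn.ReLU(),
                       nn.Conv2d(96, 96, (7, 1), padding=(3, 0)))
block = ResidualBranch(branch, in_ch=64, out_ch=96)
```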
We use projection shortcuts throughout the original inception-v3 modules due to the dimension constraint (if the input and output dimensions of the main branch are not the same, a projection shortcut should be used instead of an identity shortcut). To demonstrate the effectiveness of the shortcut paths in the inception modules, we reproduce the ILSVRC2012 classification benchmark (Russakovsky et al., 2015) for inception-v3 and our ResCeption network. As in Fig. 5a, we verify that the residual shortcut paths are beneficial for fast training and slightly better generalization. The whole training curve is shown in Fig. 5b. The best validation error reached is 23.37% and 6.17% at top-1 and top-5 respectively, which is a competitive result. To demonstrate the representation power of our ResCeption, we employ a transfer learning strategy, applying the pre-trained ResCeption as an image encoder to generate captions. In this experiment, we verify that our ResCeption encoder outperforms the existing VGG16 network on the MS-COCO challenge benchmark (Chen et al., 2015). The best validation CIDEr-D score (Vedantam et al., 2015) for c5 is 0.923 (see Fig. 5c), and the test CIDEr-D score for c40 is 0.937. (We submitted our final result with beam search on the MS-COCO evaluation server and found that beam search improves the final CIDEr-D c40 score by 0.02.)
Figure 5: Training curves on the ILSVRC2012 and MS-COCO datasets with our ResCeption model. (a) Early validation curve on ILSVRC2012; (b) the whole training curve on ILSVRC2012; (c) validation curve on MS-COCO.
The traditional multi-class classification associates an instance x with a single label a from a previously defined finite set of labels A. The multi-label classification task associates several finite sets of labels A_n in A. The most well-known methods in the multi-label literature are the binary relevance method (BM) and the label combination method (CM). There are drawbacks to both BM and CM. BM ignores the label correlations that exist in the training data. CM directly takes label correlations into account; however, a disadvantage is its worst-case time complexity, which grows with the number of label combinations. To tackle these drawbacks, we introduce the use of the RNN. Suppose we have random variables a in A_n, A_n in A. The objective of the RNN is to maximise the joint probability p(a_t, a_{t-1}, ..., a_0), where t is a sequence (time) index. This joint probability is factorized as a product of conditional probabilities recursively,

p(a_t, a_{t-1}, \ldots, a_0) = p(a_0)\, p(a_1 \mid a_0)\, p(a_2 \mid a_1, a_0) \cdots = p(a_0) \prod_{t} p(a_t \mid a_{t-1}, \ldots, a_0). \quad (4)

Following Eq. (4), we can handle multi-label classification as sequence classification, which is illustrated in Fig. 6. There are many label dependencies among our fashion-attributes, and direct modelling of such label dependencies in the training data using the RNN is our key idea. We use the ResCeption as a vision encoder \theta_I, an LSTM and softmax regression as our sequence classifier \theta_{seq}, and the negative log-likelihood (NLL) as the loss function. We backpropagate the gradient signal from the sequence classifier to the vision encoder. (Our attribute recognition model is parameterized as \theta = [\theta_I; \theta_{seq}]; in our case, updating \theta_I as well as \theta_{seq} in the gradient descent step helps achieve much better performance.)
Figure 6: An example of the fashion-attribute dependence tree for a given image (e.g. #Top-Bottom, #Dress, #Round-neck, #A-half-sleeved, #Knee-length) and the objective function of our fashion-attribute recognition model, \theta^{*} = \arg\max_{\theta} \left[ p_{\theta_{seq}}(a_0 \mid g_{\theta_I}(I)) \times p_{\theta_{seq}}(a_1 \mid a_0, g_{\theta_I}(I)) \times p_{\theta_{seq}}(a_2 \mid a_0, a_1, g_{\theta_I}(I)) \times \cdots \right]
Empirical results of our ResCeption-LSTM based attribute recognition are in Fig. 2. Many fashion-category dependent attributes such as sweetpants, fading, zipper-lock, mini, and tailored-collar are recognized quite well. Fashion-category independent attributes (e.g., male, female) are also recognizable. It is worth noting that we do not model the fashion-attribute dependence tree at all; we demonstrate that the RNN learns the attribute dependency structure implicitly. We evaluate our attribute recognition model on the fashion-attribute dataset, split into 721,544, 40,000, and 40,000 images for training, validation, and testing. We employ an early-stopping strategy to prevent over-fitting, using the validation set. We measure precision and recall between the set of ground-truth attributes and the set of predicted attributes for each image. The quantitative results are in Table 2.

Table 2: A quantitative evaluation of the ResCeption-LSTM based attribute recognition model.

Measurement   Train   Validation   Test
Precision     0.866   0.842        0.841
Recall        0.867   0.841        0.842
NLL           0.298   0.363        0.363
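A hedged sketch of this image-conditioned sequence classifier (PyTorch; the vocabulary size, dimensions and the way the image code seeds the LSTM state are assumptions):

```python
import torch
import torch.nn as nn

class AttributeRNN(nn.Module):
    def __init__(self, vocab, img_dim, hid=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, hid)    # attribute tokens
        self.init_h = nn.Linear(img_dim, hid)    # image code -> initial state
        self.lstm = nn.LSTM(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)         # softmax regression layer

    def forward(self, img_code, tokens):
        h0 = torch.tanh(self.init_h(img_code)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        seq, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(seq)                     # logits for p(a_t | a_<t, I)

# NLL training per Eq. (4): shift the token sequence by one position so each
# step predicts the next attribute, e.g.
# loss = nn.CrossEntropyLoss()(logits.flatten(0, 1), targets.flatten())
```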
Our prediction model for fashion-attribute recognition is based on the sequence generation process of the RNN. The attribute-sequence generation process is illustrated in Fig. 7. First, we predict the probability of the first attribute for a given internal representation of the image, i.e. p_{\theta_{seq}}(a_0 | g_{\theta_I}(I)), and then sample from the estimated probability of the attribute, a_0 ~ p_{\theta_{seq}}(a_0 | g_{\theta_I}(I)). The sampled symbol is fed back as the next input to compute p_{\theta_{seq}}(a_1 | a_0, g_{\theta_I}(I)). This sequential process is repeated recursively until the sampled result reaches the special end-of-sequence (EOS) symbol. In the case where we generate a set of attributes for a guided fashion-category, we do not sample from the previously estimated probability, but select the guided fashion-category and feed it in as the next input deterministically. This is the key to accommodating each seller's intention. Results for the guided attribute-sequence generation are shown in Fig. 8 (a sketch of this decoding loop follows the section).
Figure 7: Guided sequence generation process."}, {"section_index": "4", "section_name": "3.4 Guided ROI DETECTION", "section_text": "Our fashion-product ROI detection is based on the Faster R-CNN (Ren et al., 2015). In the conventional multi-class Faster R-CNN detection pipeline, one takes an image and outputs a tuple of (ROI coordinate, object-class, class-score). In our ROI detection pipeline, we take additional information: the guided fashion-category from the ResCeption-LSTM based attribute-sequence generator. Our fashion-product ROI detector finds where the guided fashion-category item is in a given image. Previous work also uses a similar idea, but trains several detectors for each category independently, so it does not scale well. We train a detector for all fashion-categories jointly. Our detector produces ROIs for all of the fashion-categories at once. In post-processing, we reject ROIs whose object-classes are not matched to the guided fashion-category. We demonstrate that the guided fashion-category information contributes to higher performance in terms of mean average precision (mAP) on the fashion-attribute dataset. We measure the mAP for the intersection-over-union (IoU) between ground-truth ROIs and predicted ROIs (see Table 3). The gain is due to the fact that the guided fashion-category information reduces the false-positive rate. In our fashion-product search pipeline, the colour and appearance features are extracted from the detected ROIs.
Figure 8: Examples of the consecutive process of guided sequence generation and guided ROI detection (guided categories include skirt, blouse, T-shirt, pants, dress, leggings and shirt, each with its recognized attribute set). Although we take the same input image, the results can be totally different when guiding with different fashion-category information.
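Returning to the generation process described at the start of this section, the guided decoding loop can be sketched as follows (the BOS/EOS token ids and the forcing interface are assumptions):

```python
import torch

@torch.no_grad()
def generate(model, img_code, bos, eos, guided=None, max_len=20):
    # guided: optional {position: token_id} forcing, e.g. the guided
    # fashion-category, selected deterministically instead of sampling.
    tokens = [bos]
    for t in range(max_len):
        logits = model(img_code, torch.tensor([tokens]))[0, -1]
        if guided and t in guided:
            nxt = guided[t]                               # forced selection
        else:
            probs = torch.softmax(logits, dim=-1)
            nxt = torch.multinomial(probs, 1).item()      # sample a_t
        if nxt == eos:
            break
        tokens.append(nxt)
    return tokens[1:]   # generated attribute sequence (without BOS)
```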
"}, {"section_index": "4", "section_name": "3.4 Guided ROI DETECTION", "section_text": "Our fashion-product ROI detection is based on the Faster R-CNN (Ren et al., 2015). In the conventional multi-class Faster R-CNN detection pipeline, one takes an image and outputs a tuple of (ROI coordinate, object-class, class-score). In our ROI detection pipeline, we take additional information: the guided fashion-category from the ResCeption-LSTM based attribute-sequence generator. Our fashion-product ROI detector finds where the guided fashion-category item is in a given image. Prior work also uses a similar idea, but trains several detectors for each category independently, so it does not scale well. We train a detector for all fashion-categories jointly. Our detector produces ROIs for all of the fashion-categories at once, and in post-processing we reject ROIs whose object classes do not match the guided fashion-category. We demonstrate that the guided fashion-category information contributes to higher performance in terms of mean average precision (mAP) on the fashion-attribute dataset. We measure the mAP for the intersection-over-union (IoU) between ground-truth ROIs and predicted ROIs (see Table 3). That is due to the fact that our guided fashion-category information reduces the false-positive rate. In our fashion-product search pipeline, the colour and appearance features are extracted from the detected ROIs.
Figure 8: Examples of the consecutive process of guided sequence generation and guided ROI detection. Although we take the same input image, results can be totally different depending on the guiding fashion-category information. (Recovered examples: guided categories skirt, blouse, T-shirt, pants, dress, leggings, and shirt dress, with recognized attributes such as #bottoms, #skirts, #maxi, #pleated-skirts, #sleeveless, #round-neck, #long-sleeved, #striped, #button-lock, #in-pocket, #roll-up-cuff, #fading, #loose-fit, #collared-shirt, and #straight-skirt.)
Table 3: Fashion-product ROI detector evaluation (mAP).

IoU         0.5    0.6    0.7    0.8    0.9
Guided      0.877  0.872  0.855  0.716  0.225
Non-guided  0.849  0.842  0.818  0.684  0.223
"}, {"section_index": "5", "section_name": "3.5 VISUAL FEATURE EXTRACTION", "section_text": "To extract the appearance feature for a given ROI, we use a pre-trained GoogleNet (Szegedy et al., 2015). In this network, both the inception4 and inception5 layers' activation maps are used. We evaluate this feature on two similar-image retrieval benchmarks, i.e. Holidays (Jégou et al., 2008) and UK-benchmark (UKB) (Nistér & Stewénius, 2006). In this experiment, we do not use any post-processing method or fine-tuning at all. The mAP on Holidays is 0.783, and the precision@4 and recall@4 on UKB are 0.907 and 0.908 respectively. These scores are competitive against several deep feature representation methods (Razavian et al., 2014; Babenko et al., 2014). Examples of queries and resulting nearest-neighbors are in Fig. 9. In the next step, we binarize this appearance feature by simply thresholding at 0. The reason we take this simple thresholding to generate the hash code is twofold. The neural activation feature map at a higher layer is a sparse and distributed code in nature. Furthermore, the bias term in a linear layer (e.g., a convolutional layer) weakly compensates for aligning the zero-centering of the output feature space. Therefore, we believe that a code from a well-trained neural model, itself, can be a good feature even when binarized. In our experiment, such simple thresholding degrades mAP by 0.02 on the Holidays dataset, but this method makes it possible to scale up the retrieval. In addition to the appearance feature, we extract a colour feature using a simple colour histogram in HSV space, and the distance between a query and a reference image is computed by a weighted combination of the two distances from the colour and the appearance features.
Figure 9: Examples of retrieved results on Holidays and UKB. The violet rectangles denote the ground-truth nearest neighbors corresponding to the queries.
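The hashing and distance computation just described can be sketched as follows (our illustration with assumed helper names; the combination weight w and the histogram bin count are placeholders, since the paper does not give their values):

import numpy as np
import cv2  # OpenCV, assumed available for the HSV conversion

def hash_code(cnn_feature):
    # binarize the pooled activation by simple thresholding at 0
    return (cnn_feature > 0).astype(np.uint8)

def colour_hist(bgr_roi, bins=8):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                     [0, 180, 0, 256, 0, 256]).flatten()
    return h / (h.sum() + 1e-8)                 # normalized HSV histogram

def distance(query, ref, w=0.5):
    d_app = np.count_nonzero(query["hash"] != ref["hash"]) / query["hash"].size
    d_col = np.abs(query["hist"] - ref["hist"]).sum()
    return w * d_app + (1 - w) * d_col          # weighted combination of the two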
To evaluate empirical results of the proposed fashion-product search system, we select 3 million fashion-product images in our e-commerce platform at random. These images are mutually exclusive to the fashion-attribute dataset. We have again selected images from the web to use as queries. All of the reference images pass through the offline process as described in Sec. 3, and the resulting inverted indexing database is loaded into main memory (RAM) by our daemon system. We send the pre-selected queries to the daemon system with the RESTful API. The daemon system then performs the online process and returns nearest-neighbor images corresponding to the queries. In this scenario, there are three options to get similar fashion-product images. Option 1 is that the fashion-attribute recognition model automatically selects the fashion-category most likely to be queried in the given image. Option 2 is that a user manually selects a fashion-category given a query image (see Fig. 10). Option 3 is that a user draws a rectangle to be queried by hand, like Jing et al. (see Fig. 11). With the recognized fashion-attributes, the retrieved results reflect the user's main needs, e.g. gender, season, and utility, as well as the fashion-style, which could be lacking when using the visual feature representation only.
Figure 10: Similar fashion-product search for Option 1 and Option 2. (Recovered subcaption: (b) for Option 2, the guided information is "blouse".)
Figure 11: Similar fashion-product search for Option 3.
Today's deep learning technology has had a great impact on various research fields, and this success story is about to be applied to many industries. Following this trend, we traced the state-of-the-art computer vision and language modelling research and used these technologies to create value for our customers, especially in the e-commerce platform. We expect active discussion on how to apply many existing research works to the e-commerce industry."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Artem Babenko, Anton Slesarev, Alexander Chigorin, and Victor S. Lempitsky. Neural codes for image retrieval. CoRR, abs/1404.1777, 2014.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325, 2015.
KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron C. Courville. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.
Bin Fu, Zhihai Wang, Rong Pan, Guandong Xu, and Peter Dolog. Learning tree structure of label dependency for multi-label learning. Advances in Knowledge Discovery and Data Mining, 2012.
Eva Gibaja and Sebastian Ventura. A tutorial on multilabel learning. The ACM Computing Surveys, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.
M. Henderson, B. Thomson, and S. J. Young. Word-based dialog state tracking with recurrent neural networks. In The Annual SIGdial Meeting on Discourse and Dialogue, 2014.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.
Piotr Mirowski and Andreas Vlachos. Dependency recurrent neural language models for sentence completion. CoRR, abs/1507.01193, 2015.
Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. In The European Conference on Machine Learning and Knowledge Discovery in Databases, 2009.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, 2015.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. The International Journal of Computer Vision, 2015.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In The AAAI Conference on Artificial Intelligence, 2016.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.
Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. CNN-RNN: A unified framework for multi-label image classification. CoRR, abs/1604.04573, 2016.
Min-Ling Zhang and Kun Zhang. Multi-label learning by exploiting label dependency. In The ACM International Conference on Knowledge Discovery and Data Mining, 2010."}]
rJEgeXFex
[{"section_index": "0", "section_name": "PREDICTING MEDICATIONS FROM DIAGNOSTIC\nCODES WITH RECURRENT NEURAL NETWORKS", "section_text": "Jacek M. Bajor, Thomas A. Lasko\nfjacek.m.bajor,tom. lasko}@vanderbilt.edu\nIt is a surprising fact that electronic medical records are failing at one of their pri:\nmary purposes, that of tracking the set of medications that the patient is actively\ntaking. Studies estimate that up to 50% of such lists omit active drugs, and tha\u2019\nup to 25% of all active medications do not appear on the appropriate patient list\nManual efforts to maintain these lists involve a great deal of tedious human labor\nwhich could be reduced by computational tools to suggest likely missing or in:\ncorrect medications on a patient\u2019s list. We report here an application of recurren\u2019\nneural networks to predict the likely therapeutic classes of medications that a pa:\ntient is taking, given a sequence of the last 100 billing codes in their record. Ou\nbest model was a GRU that achieved high prediction accuracy (micro-averagec\nAUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on mode\nsize. Additionally, examining individual cases revealed that many of the predic\ntions marked incorrect were likely to be examples of either omitted medications\nor omitted billing codes, supporting our assertion of a substantial number of er:\nrors and omissions in the data, and the likelihood of models such as these to helf\ncorrect them."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The idea of exploiting the large amounts of data captured in electronic medical records for bott\nclinical care and secondary research holds great promise, but its potential is weakened by errors anc\n\nomissions in those records (Safran et al. de Lusignan & van Weel}/2006). Among many othe:\n\nproblems, accurately capturing the list of medications currently taken by a given patient is extremel}\n\nchallenging (Velo & Minuz! ). In one study, over 50% of electronic medication lists containec\nomissions (Caglar et al. , and in another, 25% of all medications taken by patients were no\nrecorded (Kaboli et al.|/2004). Even medication lists provided by the patients themselves contair\n\nmultiple errors and\n\nomissions (Green et al.]{2010) .\nMany efforts have been made to ensure the correctness of medication lists, most of them involvin:\nimproved communication between patients and providers , but these effort\nhave not yet been successful, and incorrect or incomplete medication documentation continues t\nbe a source of error in computational medical research. In this work we attempt to identify likel\nerrors and omissions in the record, predicting the set of active medications from the sequence o\nmost recent disease-based billing codes in the record. Predictions from such a model could be use\n2ither in manual medication reconciliation (a common process undertaken to correct the medicatioi\nrecord) or to provide a prior to other models, such as an NLP model attempting to extract medicatioi\nuse from the narrative clinical text.\nGiven the sequential nature of clinical data, we suspected that recurrent neural networks would be a\ngood architecture for making these predictions. 
In this work we investigate this potential, comparing the performance of recurrent networks to that of similarly-configured feed-forward networks.
The input for each case is a sequence of ICD-9 billing codes (Section 2.1), for which the model produces a single, multi-label prediction of the therapeutic classes (Section 3.1) of medications taken by the patient during the period of time covered by the billing code sequence.
This work is designed to test how well the complete set of medications a patient is actively taking at a given moment can be predicted by the sequence of diagnostic billing codes leading up to that moment, in the context of non-trivial label noise. It also explores whether sequence-oriented recursive neural nets can do a better job of that prediction than standard feed-forward networks."}, {"section_index": "2", "section_name": "2.1 MEDICAL BILLING CODES", "section_text": "Each time a patient has billable contact with the healthcare system, one or more date-stamped billing codes are attached to the patient record, indicating the medical conditions that are associated (or suspected to be associated) with the reason for the visit. While these codes are notoriously unreliable because they are only used for billing and not actual clinical practice (O'Malley et al., 2005), they are nevertheless useful in a research context (Bastarache & Denny, 2011; Denny et al., 2010), especially if they are used probabilistically (Lasko, 2014). In our institution, codes from the International Classification of Diseases, Ninth Revision (ICD-9) have historically been used, although we have recently transitioned to the tenth revision (ICD-10). For this project, we used ICD-9 codes.
The ICD-9 hierarchy consists of 21 chapters roughly corresponding to a single organ system or pathologic class (Appendix B). Leaf-level codes in that tree represent single diseases or disease subtypes. For this project, we used a subset of the two thousand most common leaf-level codes as our input data.
Most of the ICLR community are very familiar with recurrent neural networks and their variations, but we include a conceptual description of them here for readers coming from other fields. More thorough descriptions are available elsewhere (Graves, 2012; Olah, 2015).
A recurrent neural network is a variation in which the output of one node on input x_t loops around to become an input to another node on input x_{t+1}, allowing information to be preserved as it iterates over an input data sequence (Figure 1). They were introduced in the 1980s (Rumelhart et al., 1986), but achieved explosive popularity only recently, after the development of methods to more reliably capture long-term dependencies, which significantly improved their performance on sequence-to-sequence mapping (Hochreiter & Schmidhuber, 1997).
Figure 1: Simplified representation of a recurrent neural network (left) and an unrolled recurrent neural network (right). x_i is a single element in an input sequence x, h_i is an output after a single pass through the recurrent unit. Adapted from Olah (2015).
The basic RNN unit has a simple internal structure (Figure 2a). Output from the previous iteration h_{t-1} and the next input in a sequence x_t are both fed to the network on the next iteration. The Long Short-Term Memory configuration (LSTM) introduces new, more complex internal structure (Figure 2b) consisting of four neural network layers and a cell state (c_t), which is carried from one iteration to another. The additional layers form forget, input and output gates, which allow for the information to be forgotten (reset) or passed on to varying degrees.
Figure 2: Architectures of (a) Simple RNN, (b) LSTM, and (c) GRU units. x_t: a single element in an input sequence being considered in the current iteration, h_{t-1}, h_t: the output from the previous and current iterations, c_{t-1}, c_t: the cell states of the previous and current iterations. Adapted from Olah (2015).
The LSTM model and its variations are commonly used in applications where sequence and temporal data are involved, such as in image captioning (Vinyals et al., 2014), language translation (Sutskever et al., 2014), and speech recognition (Graves et al., 2013). In many cases LSTM models define the state of the art, such as with a recent conversational speech recognizer that (slightly) outperforms professional transcriptionists (Xiong et al., 2016).
A recent variation on the LSTM architecture is the Gated Recurrent Unit (GRU) (Cho et al., 2014), which introduces a single update gate in place of the input and forget gates (Figure 2c). GRUs perform as well as or better than LSTMs in many cases (Chung et al., 2014; Jozefowicz et al., 2015), and have the additional advantage of a simpler structure.
In this work we try both an LSTM and a GRU on our learning problem.
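For reference, the GRU update just described can be written in a standard formulation (our transcription of the usual equations, e.g. from Cho et al. (2014), not reproduced from this paper):

\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) && \text{(candidate state)} \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new output)}
\end{aligned}

The single update gate z_t replaces the LSTM's separate input and forget gates, and no separate cell state is carried.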
Little research in the computational medical domain has used recurrent neural networks. The earliest example we are aware of is the use of an LSTM model that produced reasonable accuracy (micro-AUC 0.86) in a 128-dimensional multi-label prediction of diagnoses from regularly sampled, continuously-monitored, real-valued physiologic variables in an Intensive Care Unit setting. This was an interesting initial application, but it turned out to be only 0.001 better than the baseline classifier, which was a multi-layer perceptron with expert-designed features (Lipton et al., 2016). Given the dataset size (10,401 patient records), the lack of improvement may have been due to insufficient data to power accurate feature learning in the recurrent network.
Very recent work, contemporary with ours, used a GRU model with a semantic embedding in 32,787 patient records to predict the development of heart failure 3-6 months in the future, from medication orders and billing codes in an 18-month window. The model achieved respectable accuracy (0.88 AUC), and demonstrated a meaningful 0.05 AUC improvement over a deep feedforward network (Choi et al., 2016b).
Other recent work from the same group used a GRU model in a multi-label context to predict the medications, billing codes, and time of the next patient visit from a sequence of that same information for previous visits, using 263,706 patient records. It achieved a recall@30 of 72.4 for the task, an improvement of 20 over a single-hidden-layer MLP with 2000 units. This is an example of using one of the strengths of a recurrent network - predicting the next element in a sequence.
It contrasts with our work, which exploits a different strength of recurrent networks - predicting a sequence or class that is semantically distinct from but parallel to the elements of the input sequence.
The closest work to ours from a medical domain perspective is a series of collaborative filter models (including co-occurrence counting, k-nearest neighbors, and logistic regression) that predict missing medications using a leave-one-drug-out evaluation design, with predictions based on the rest of the medications, ICD-9 billing codes, and demographic data. The models were trained and tested on data from 419 patients in three different clinics, with accuracy varying by clinic, as expected, but not appreciably by model. Most models ranked the missing drug in the top 10 results between 40 and 50% of the time, and ranked the therapeutic class of the drug in the top 10 results between 50 and 65% of the time.
Many aspects of our work can be found in these prior efforts, but none addresses our particular problem in the same way. Our work is unique in its learning problem of identifying all drugs a patient is likely to be taking, based only on the billing codes in the record. Like most others cited, we use recurrent neural networks in a multi-label predictive context, but in contrast to them we compare to the most similar non-recurrent model we can construct, in order to evaluate the contribution of the temporal sequence information to the solution. Finally, we use one to four orders of magnitude more data (3.3 million instances, see Section 3.1) than these prior efforts, which we hope will give us a more realistic assessment of the various deep architectures we use on our problem."}, {"section_index": "3", "section_name": "3.1 DATA", "section_text": "Our source database was the deidentified mirror of Vanderbilt's Electronic Medical Record, which contains billing codes, medication histories, laboratory test results, narrative text and medical imaging data for over 2 million patients, reaching back nearly 30 years (Roden et al., 2008). We obtained IRB approval to use this data in this research.
For this experiment we filtered all records in our database to include only the top 1,000 most common medications and the top m = 2000 most common billing codes, which cover 99.5% of all medication occurrences and 85.1% of all billing code occurrences. We then included all records from the filtered data that had at least one medication occurrence and at least ten billing code occurrences. This resulted in 610,076 complete patient records, which we divided 80/5/15 into training, validation, and final test sets.
A data instance d = {E, T, y} consisted of a sequence E = {e_1, ..., e_n} of one-hot billing code vectors e_i in {0, 1}^m and their associated times T = {t_1, ..., t_n}, t_i in R, as input, and a multi-label vector y in {0, 1}^k of medication classes as the output target. The most recent n = 100 billing codes up to a selected reference time point in a given patient record were collected into the input sequence E, and their occurrence times into T, zero padding if necessary. All medications that occurred during the time span of T were then collected into the output vector y. Practice patterns change over time, so simply taking the most recent 100 codes for each patient could produce a biased result. To avoid this, we chose random reference points, stratified by medication. In other words, the reference points were randomly chosen from the occurrences of each medication in the entire dataset, up to 10,000 points per medication. This resulted in 3.3 million data instances, an average of 5.4 instances per patient record. Each patient's data was included in at most one of the training, validation, or test sets.
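A small NumPy sketch of one data instance as defined above: the most recent n = 100 one-hot billing codes E, their times T, and a multi-hot label vector y over k = 182 therapeutic classes. This is our illustration of the construction, not the authors' code; variable names are assumed.

import numpy as np

m, k, n = 2000, 182, 100   # code vocabulary, ATC classes, sequence length

def build_instance(code_ids, code_times, med_class_ids):
    # code_ids / code_times: the record up to the reference point, oldest first
    code_ids, code_times = code_ids[-n:], code_times[-n:]   # most recent n codes
    E = np.zeros((n, m), dtype=np.int8)
    T = np.zeros(n, dtype=np.float32)
    pad = n - len(code_ids)            # zero-pad short records at the front
    for j, (c, t) in enumerate(zip(code_ids, code_times)):
        E[pad + j, c] = 1              # one-hot billing code e_i
        T[pad + j] = t
    y = np.zeros(k, dtype=np.int8)
    y[list(med_class_ids)] = 1         # all classes observed during the span of T
    return E, T, y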
Because there are often many approximately equivalent medication choices for a given therapeutic purpose, we converted medication names to their therapeutic class (beta blocker, immunosuppressant, corticosteroid, etc.) as a synonym reduction step. This step also aggregated generic with brand names, as well as different formulations of the same active ingredient. For this task we used the Anatomical Therapeutic Chemical Classification System (ATC)^1, which is a multi-level ontology of medications, organized by both anatomic and therapeutic class. The top level is a broad categorization of medications (Appendix B), the bottom (fifth) level is individual medications, and we used the third level, which contains 287 therapeutic classes of the approximately appropriate abstraction level for our purpose. We used a publicly available mapping^2 to translate between our medication names and ATC codes, with manual mapping for the minority of medications that had no mapping entry. Our set of medications used k = 182 third-level ATC codes, rendering our output label a 182-element multi-label vector, in which an element is set y_i = 1 if a medication in that class appeared in the set of medications identified for that instance, and y_i = 0 otherwise. Some medications mapped to more than one class, and we set y_i = 1 for all of them.
Medication data was collected from structured order entry records and extracted using NLP (Xu et al., 2010) from mentions in the narrative text of a patient record that included the medication name, dose, route and frequency. As discussed above, we assumed (and our results demonstrate) that the medication data is incomplete, and our hope was that a model learned from a sufficiently large dataset will be robust to the missing data.
^1 http://www.whocc.no/atc/structure_and_principle
^2 https://www.nlm.nih.gov/research/umls/rxnorm/
This configuration represents the input billing codes in a sequence, but the output medications as a multi-label vector. This is because ICD-9 codes are represented sequentially in our source data, but medications are not. They are represented as a list that changes over time in the record. The usual goal of clinicians is to verify the list of medications at each visit, and if omissions or additions are indicated by the patient, to change the list to reflect that. But in the time-constrained reality of clinical practice, this reconciliation happens sporadically, and many clinicians are hesitant to change an entry on the medication list for which they were not the original prescriber, so the timing of the changes in the documentation does not reflect the timing of changes in reality. Therefore we are reduced to predicting a single multi-label vector, representing the medications that the patient probably took during the span of time represented by the input codes. (We actually did attempt some full sequence-to-sequence mappings, with various orderings of the medication sequences, but we did not achieve any promising results in that direction.)
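The therapeutic-class reduction can be illustrated with a toy mapping (hypothetical example data; a real pipeline would use the RxNorm mapping cited above). The third ATC level corresponds to the first four characters of a fifth-level code:

ATC5_BY_NAME = {"metoprolol tartrate": ["C07AB02"],
                "prednisone": ["A07EA03", "H02AB07"]}   # some drugs map to >1 code

def atc_level3(med_names):
    classes = set()
    for name in med_names:
        for code in ATC5_BY_NAME.get(name, []):
            classes.add(code[:4])      # first 4 characters = third (therapeutic) level
    return sorted(classes)             # e.g. ['A07E', 'C07A', 'H02A']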
Our main technical goal was to test the performance of recurrent neural networks on this sequence-centric prediction problem. To evaluate the specific gains provided by the recurrent architectures, we compare performance against a fully connected feed-forward network configured as similarly as possible to the recurrent networks, and (as baselines) a random forest and a constant-prevalence model. We discuss the specific configurations of these classifiers in this section."}, {"section_index": "4", "section_name": "3.2.1 RECURRENT NEURAL NETWORKS", "section_text": "We tested both LSTMs and GRUs in this experiment. We configured both architectures to first compute a semantic embedding x_i in R^b of each input e_i vector, before appending the times t_i (Figure 3) and feeding the result to three layers of recurrent units. The final output from the last pass of the recurrent unit serves as the multi-label prediction for each candidate medication.
Figure 3: Recurrent (left) and feed-forward (right) neural network architectures. Arrows indicate the flow of information. Input for both models is the sequence of billing code observations e and the sequence of corresponding timestamps t. A code observation e_i passes through an embedding layer, producing an embedding vector x_i, which is then appended with time t_i. The processed matrix then passes through either recurrent layers or feed-forward layers. The output in both cases is a single vector y of label probabilities.
The optimal hyperparameters for the model were selected by randomized parameter optimization (Bergstra & Bengio, 2012), with the embedding dimension b = 32, number of layers, and number of nodes optimized by a few trials of human-guided search. Other optimized parameters included the fraction of dropout (between layers, input gates and recurrent connections), and the L1 and L2 regularization coefficients (final values are presented in Appendix A).
Both models were implemented using Keras (Chollet, 2015) and trained for 300 iterations using cross-entropy under the Adadelta optimizer (Zeiler, 2012).
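A minimal Keras sketch of the recurrent architecture described above (our illustration, not the authors' exact code; the padding-id handling and layer arrangement are assumptions, while the three 400-node GRU layers and the dropout values follow Appendix A):

from tensorflow import keras
from tensorflow.keras import layers

m, k, n, b = 2000, 182, 100, 32    # codes, classes, sequence length, embedding dim

codes = keras.Input(shape=(n,), dtype="int32")   # code ids (0 assumed reserved for padding)
times = keras.Input(shape=(n, 1))                # timestamps t_i
x = layers.Embedding(m + 1, b)(codes)            # semantic embedding x_i
x = layers.Concatenate()([x, times])             # append time to each embedded code
x = layers.GRU(400, return_sequences=True, dropout=0.1, recurrent_dropout=0.75)(x)
x = layers.GRU(400, return_sequences=True, dropout=0.1, recurrent_dropout=0.75)(x)
x = layers.GRU(400)(x)                           # final pass summarizes the sequence
y = layers.Dense(k, activation="sigmoid")(x)     # 182 independent label probabilities
model = keras.Model([codes, times], y)
model.compile(optimizer="adadelta", loss="binary_crossentropy")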
"}, {"section_index": "5", "section_name": "3.2.2 FULLY CONNECTED NEURAL NETWORK", "section_text": "The fully connected network used as similar an architecture as possible to the recurrent networks, in an attempt to isolate the gain achieved from the recurrence property. Specifically, we used the same architecture for embedding and timestamp appending (Figure 3).
Hyperparameters were optimized using random search over the number of layers, number of nodes, dropout, activation function between layers, and L1 and L2 regularization coefficients (Appendix A). (Surprisingly, the optimizer chose tanh over ReLU as the optimal activation function.)
The models were also implemented using Keras, and were trained using cross-entropy for 500 iterations under the Adadelta optimizer."}, {"section_index": "6", "section_name": "3.2.3 RANDOM FOREST", "section_text": "Because the random forest model is not easily structured to operate on sequences, we represented the input data as either binary occurrence vectors v in {0, 1}^m, or bag-of-codes vectors w in N^m (counts of each code value in the sequence), rather than as sequences of codes with associated times. No embedding was used, because the random forest code was not able to cope with the large size of the data in the (dense) embedded space. Even in the (sparse) original space, the full dataset was too large for the random forest code, so we implemented it as an ensemble of ten independent forests, each trained on one tenth of the training data, with their average score used for test predictions.
Models were implemented using scikit-learn (Pedregosa et al., 2011) with parameters optimized under random search (Appendix A).
While other models could reasonably serve as a baseline for this work, we chose a random forest because they tend to perform well on widely varying datasets (Fernández-Delgado et al., 2014), they are efficient to train and test, and they don't require a huge effort to optimize (in order to produce a fair comparison).
This minimum baseline model, the constant-prevalence model, simply predicts the prevalence of each label for all instances. For example, if there were three possible medications, with prevalences of 0.3, 0.9, and 0.2, then the prediction of this model would be a constant [0.3, 0.9, 0.2] for each instance. We include this model in order to mitigate the fact that while all of our evaluation measures are suitable for comparing models on the same data, some are not well suited for external comparison because they depend, for example, on the prevalence of positive labels (Section 3.4). By including this model we can at least establish a true minimum baseline for reference.
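The constant-prevalence baseline is simple enough to state in a few lines (a sketch, with assumed variable names):

import numpy as np

def constant_prevalence_predict(y_train, n_test):
    prevalence = y_train.mean(axis=0)         # per-label positive rate in training data
    return np.tile(prevalence, (n_test, 1))   # identical prediction for every instance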
"}, {"section_index": "7", "section_name": "3.4 EVALUATION", "section_text": "Our main evaluation focused on the models, although we also performed a separate evaluation of the embedding.
There are several possibilities for evaluation in a multi-label classification context (Sechidis et al., 2011; Zhang & Zhou, 2014). We chose micro-averaged area under the ROC curve (AUC) and label ranking loss as the primary methods of evaluation, because they treat each instance with equal weight, regardless of the nature of the positive labels for that instance. In other words, we wanted primary measures that did not give a scoring advantage to instances with either very many or very few positive labels, or that included very rare or very prevalent labels. Additionally, both of these measures appeal to us as intuitive extensions of the usual binary AUC, when seen from the perspective of a single instance. However, because these two measures don't reflect all aspects of multi-label prediction performance, we also include macro-averaged AUC, label ranking average precision, and coverage error measures.
Micro-averaged AUC considers each of the multiple label predictions in each instance as either true or false, and then computes the binary AUC as if they all belonged to the same 2-class problem (Zhang & Zhou, 2014). In other words, micro-averaged AUC A_{micro} is

A_{micro} = \frac{|\{(x, x', l, l') : f_l(x) \ge f_{l'}(x'), (x, l) \in S^+, (x', l') \in S^-\}|}{|S^+|\,|S^-|},   (1)

where S^+ is the set of (instance, label) pairs with a positive label, S^- is the corresponding set of pairs with a negative label, and f_l(x) is the predicted score of label l for instance x.
Label ranking loss L_R gives the average fraction of all possible (positive, negative) label pairs for each instance in which the negative label has a higher score than the positive label (Tsoumakas et al., 2010):

L_R = \frac{1}{N} \sum_{j=1}^{N} \frac{1}{|Y_j|\,|\bar{Y}_j|} \left| \{(l, l') : f_l(x_j) \le f_{l'}(x_j), (l, l') \in Y_j \times \bar{Y}_j\} \right|,   (2)

where Y_j is the set of positive labels of instance j, \bar{Y}_j is its complement, and N is the number of instances.
Macro-averaged AUC can be thought of as averaging the AUC performance of several one-vs-all classifiers, one model for each label. It treats each model equally, regardless of the prevalence of positive labels for that model. This gives a score of 0.5 to the constant-prevalence model, at the cost of weighting instances differently in order to achieve that. This is in contrast to micro-averaged AUC, which can be thought of as averaging across instances rather than labels. It weighs each instance equally, at the cost of a 0.5 score no longer being the random-guessing baseline.
Label ranking average precision gives the mean fraction of correct positive labels among all positive labels with lower scores for each label. The coverage error function calculates the mean number of labels on the ranked list that are needed to cover all the positive labels of the sample. Both of these depend on the prevalence of positive labels in a test instance."}, {"section_index": "8", "section_name": "4 RESULTS AND DISCUSSION", "section_text": "The GRU model had the top performance by all measures, and the LSTM was a close second (Table 1), a performance pattern consistent with previous reports (Chung et al., 2014). The deep neural net performance was about 0.01 worse in both measures, suggesting that the recurrent models were able to use the sequence information, but only to a small advantage over the most similar non-temporal architecture. However, we note that both RNNs' performance peaked at the top end of our tractable range for model size, while the feed-forward network peaked using a model about one third that size (Appendix A). Experimenting with the architecture, we found that increasing the number of nodes or layers for the feed-forward network increased training time but not performance. This suggests that the RNN performance was limited by the hardware available, and increasing the size of the model may further increase performance, and that the feed-forward network was limited by something else.
Both random forest models were weaker than the deep neural net, as might be expected from the need to resort to binary and bag-of-codes representations of the input data.
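The measures reported in Table 1 have direct scikit-learn implementations; a sketch on synthetic data with the shapes used in this paper (N instances, k = 182 labels):

import numpy as np
from sklearn.metrics import (roc_auc_score, label_ranking_loss,
                             label_ranking_average_precision_score,
                             coverage_error)

rng = np.random.default_rng(0)
y_true = (rng.random((1000, 182)) < 0.08).astype(int)            # multi-hot ground truth
y_score = np.clip(y_true * 0.6 + rng.random((1000, 182)) * 0.5, 0, 1)

print("micro-AUC:     ", roc_auc_score(y_true, y_score, average="micro"))
print("macro-AUC:     ", roc_auc_score(y_true, y_score, average="macro"))
print("ranking loss:  ", label_ranking_loss(y_true, y_score))
print("avg. precision:", label_ranking_average_precision_score(y_true, y_score))
print("coverage error:", coverage_error(y_true, y_score))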
We evaluated the embedding based on how strongly related, in a clinical semantic sense, the nearest neighbor to each code is in the embedding space. A licensed physician manually annotated the list of all 2000 codes with a match category m in {strongly related, loosely related, unrelated}, and we computed the empirical marginal probability P(m) of each category, the empirical conditional probability P(m|d) of the match category given the nearest-neighbor (Manhattan) distance d, and the empirical marginal probability P(d). For comparison, we computed P(m) under 100 random code pairings.
Table 1: Results of multi-label classification for each model. Baseline is the constant-prevalence model. Perfect is the best possible performance for our data under the given measure.

Model        Micro-AUC  Label Ranking Loss  Macro-AUC  Label Ranking Avg. Precision  Coverage Error
GRU          0.927      0.076               0.861      0.603                         62.6
LSTM         0.926      0.077               0.859      0.600                         63.0
NN           0.916      0.086               0.835      0.570                         67.3
RF (binary)  0.903      0.102               0.804      0.523                         73.7
RF (counts)  0.894      0.111               0.787      0.497                         77.3
Baseline     0.828      0.172               0.500      0.355                         97.2
Perfect      1.0        0.0                 1.0        1.0                           15.0

A natural question is what performance is good enough for clinical use. While there is little clinical experience with multi-label classifiers, we would generally expect clinicians using a binary classifier in an advisory role to find an AUC of at least 0.9 to be useful, and an AUC of at least 0.95 to be very useful. An AUC difference of 0.01, and perhaps 0.005, is potentially noticeable in clinical use.
This 0.9/0.01 rule of thumb may loosely translate to our AUC variants, but it can directly translate to Label Ranking Loss L_R (Eq. 2). If we think of a single output prediction y in [0, 1]^k as a set of predictions for k binary labels, then 1 - AUC for that set of predictions is equivalent to L_R for the original instance. Therefore, values of L_R at or below 0.1 may be clinically useful, and L_R at or below 0.05 may be very useful.
Subjectively examining performance on 20 randomly selected cases, we find very good detailed predictions, but also evidence of both missing medications and missing billing codes. An example of a good set of detailed predictions is from a complex patient suffering from multiple myeloma (a type of cancer) with various complications. This patient was taking 26 medications, 24 of which had moderate to high probability predictions (Figure 4). (We have found by eyeball that a prediction cutoff of 0.2 gives a reasonable balance between sensitivity and specificity for our model.) In the other direction, only two of the high-prediction classes were not actually being taken, but those classes, along with several of the other moderately-predicted classes, are commonly used for cancer and are clinically reasonable for the case. (Details of this and the two cases below are in Appendix C.)
Figure 4: Medication predictions for a complicated patient. Each vertical bar represents the prediction for a single medication class, with the height of the bar representing the confidence of the prediction. Black labels with arrows indicate ATC therapeutic classes for medications the patient was actually taking. Colors and letters below the axis indicate organ system groups. More detail in Appendix C.
A good example of missing medications is a case in which the record has multiple billing codes for both osteoporosis (which is very commonly treated with medication) and postablative hypothyroidism (a deliberately induced condition that is always treated with medication), but no medications of the appropriate classes were in the record. The GRU model predicted both of these classes, which the patient was almost surely taking.
A good example of either missing billing codes or discontinued medications that remain documented as active is a case in which the record has at least five years of data consisting only of codes for Parkinson's disease, but which lists medications for high cholesterol, hypertension, and other heart disease. The GRU model predicted a reasonable set of medications for Parkinson's disease and its complications, but did not predict the other medications that are not suggested by the record.
Given how easy it was to find cases with apparently missing codes and medications, we conclude that there is indeed a substantial amount of label noise in our data, and we therefore interpret our models' performance as lower bounds on the actual performance. We are encouraged that this kind of a model may actually be useful for identifying missing medications in the record, but of course a more thorough validation, and possibly a more accurate model, would be necessary before use in a clinical scenario. A definitive experiment would use off-line research, including reconciling information from various electronic and human sources to establish the ground truth of which medications were being taken on a particular day, but such efforts are labor intensive and expensive, and can only be conducted on a very small scale.
An interesting byproduct of these models is the semantic embedding of ICD-9 codes used in the recurrent networks (Figure 5). Transforming input to a semantic embedding is a common pre-processing step to improve performance, but clearly the semantic understanding it provides to an algorithm can be useful beyond the immediate learning problem (Mikolov et al., 2013). Investigating the embedding learned in this experiment shows some generalizable potential, but it also reveals the need for further refinement before it can be truly useful. Specifically, while it's easy to find tight groups of ICD-9 codes that are strongly clinically related in our embedding, we also find groups for which we cannot see a meaningful clinical relationship.
For example, we see two groups of codes relating to kidney failure and diabetes mellitus, two classes of very prevalent disease (Figure 5, insets). In other iterations with different parameter settings, the kidney failure codes were even embedded in a sequence reflecting the natural progression of the disease, with the code for dialysis (an intensive treatment for end-stage kidney failure) embedded at the appropriate place. Interestingly, these were not the parameter settings that optimized overall prediction performance. In other settings, such as our performance-optimal setting, the sequence is close to the natural progression of the disease, but not quite identical. Nevertheless, this is an exciting result that suggests great potential.
Further evaluation of the embedding found that 49% of codes were strongly related semantically to their nearest neighbor, 10% were loosely related, and 41% were unrelated. This fraction of strongly related nearest neighbors was lower than we had hoped, but much higher than expected by chance (Figure 6), and it definitely improved classification performance. Furthermore, it was obvious by inspection that in general, codes closer in the embedding were more semantically related than distant codes, but interestingly, the distance to the nearest such neighbor showed the opposite relationship - nearest neighbors that were very close were less likely to be semantically related than nearest neighbors that were far, and this trend is roughly linear across the full range of d (Figure 6). So the sparser the points are in the embedded space, the more semantically related they are to their nearest neighbor, but the causal direction of that effect and the technical reason for it are beyond the scope of this initial work.
For this prediction problem, we settled on predicting the medications that occurred in the record during the same time span as the billing codes used. Originally, we intended to predict only the medications listed on the day of the reference point, but that turned out to greatly exacerbate the missing medication problem. After trying medications that fell on the reference day only, the week prior to the reference day, and the six months prior, our best performance both subjectively and objectively was achieved using the full time range of the input data.
While the performance of the recurrent networks was quite good, we believe it could be improved by including additional input data, such as laboratory test results, demographics, and perhaps vital signs. We also suspect that if we can devise a way to convert our medication data into reliably-ordered sequences, we can more fully exploit the strengths of recurrent networks for medication prediction. We look forward to trying these and other variations in future work.
Furthermore, it was obvious b\nnspection that in general, codes closer in the embedding were more semantically related than dista1\nodes, but interestingly, the distance to the nearest such neighbor showed the opposite relationshi\n\u2014 nearest neighbors that were very close were less likely to be semantically related than neare\neighbors that were far, and this trend is roughly linear across the full range of d (Figure|6). So th\nparser the points are in the embedded space, the more semantically related they are to their neare\neighbor, but the causal direction of that effect and the technical reason for it are beyond the sco\nf this initial work.\nFor this prediction problem, we settled on predicting the medications that occurred in the record\nduring the same time span as the billing codes used. Originally, we intended to predict only the\nmedications listed on the day of the reference point, but that turned out to greatly exacerbate the\nmissing medication problem. After trying medications that fell on the reference day only, the week\nprior to the reference day, and the six months prior, our best performance both subjectively and\nobjectively was achieved using the full time range of the input data.\nWhile the performance of the recurrent networks was quite good, we believe it could be improved\nby including additional input data, such as laboratory test results, demographics, and perhaps vital\n1585.9 Chronic kidney disease, unspecified\n585.3 Chronic kidney disease, Stage Il (moderate)\n\u00a9 585.6 End stage renal disease\n\n(\u00a9 585.4 Chronic kidney disease, Stage IV (severe)\n\n\\V45.11 Renal dialysis status\n\u00b0 (\u00a9 585.5 Chronic kidney disease, Stage V\n\u00a9 V45.1 Postsurgical renal dialysis status\n\n285.21 Anemia in chronic kidney disease\n\n\u00b0\ns 787.4 Heartburn\n\n721.00 Synovitis and tenosynovitis\nir. 909.24 Adjustment disorder with anxiety\nPee {\u00a9 831.00 Closed dislocation of shoulder\n\n724.3 Sciatica \u00a9 701.4 Keloid scar\n\n250.84 Diabetes with other specified manifestations, type |.\n[36201 Background abet retnopahy\n\n 260:40 Diabetes with renal manifestations, type Il or unspecified,\n\n\u2018@ ,250.80 Diabetes with other specified Manifestations, type lor unspecitied,\n\n250.50 Diabetes with ophthalmic manifestations, type Il or unspecified,\n\n250.42 Diabetes with renal manifestations, type Il or unspecified, uncontrol\n\n250.01 Diabetes without complication, type |\n\n\u2018\u00a9 250.62 Diabetes with neurological manifestations, type Il or unspecified,\nuncontrolled\n\n{\u00a9 250.60 Diabetes with neurological manifestations, type ll or unspecified |\n\u2018585.3 Chronic kidney disease, Stage Ill (moderate)\n\u00a9 585.6 End stage renal disease\n\n(\u00a9 585.4 Chronic kidney disease, Stage IV (severe)\n\n\\V45.11 Renal dialysis status\n\n\u00b0 (\u00a9 585.5 Chronic kidney disease, Stage V\n\u00a9 V45.1 Postsurgical renal dialysis status\n\n285.21 Anemia in chronic kidney disease\n\n\u00b0\ns 787.4 Heartburn\n\n727.00 Synovts and tenosynovitis\nir. 
Figure 5: A t-SNE representation of our final embedding. The insets highlight two groups of codes (diabetes mellitus and kidney failure) that are strongly related clinically, and a third group that is not. Codes are colored by whether their nearest neighbor in the embedding space (which may be different from the nearest neighbor in this t-SNE space) is strongly related (blue), loosely related (orange), or unrelated (gray) from a clinical perspective.
Figure 6: Semantic relatedness of nearest neighbors vs. the distance between them. Solid lines are the conditional probabilities P(m|d) for the three values of m; the dashed line is the marginal probability P(d) of nearest neighbor distances d. Surprisingly, nearest neighbors that are farther away (but still the nearest neighbor) are more strongly related than nearest neighbors that are closer in the embedding space. Shaded regions, colored to correspond to the three values of m, are the 95% CI for empirically estimated P(m) under random pairings, and represent the expected null result."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by grants from the Edward Mallinckrodt, Jr. Foundation and the National Institutes of Health R21LM011664 and R01EB020666. Clinical data was provided by the Vanderbilt Synthetic Derivative, which is supported by institutional funding and by the Vanderbilt CTSA grant UL1TR000445."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Lisa Bastarache and Joshua C. Denny. The use of ICD-9 codes in genetic association studies. In AMIA Annu Symp Proc, volume 2011, pp. 1738, 2011.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281-305, 2012.
Selin Caglar, Philip L. Henneman, Fidela S. Blank, Howard A. Smithline, and Elizabeth A. Henneman. Emergency department medication lists are not accurate. The Journal of Emergency Medicine, 40:613-616, Jun 2011.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
Edward Choi, Andy Schuetz, Walter F. Stewart, and Jimeng Sun. Using recurrent neural network models for early detection of heart failure onset. J Am Med Inform Assoc, Aug 2016b.
François Chollet. Keras.
https://github.com/fchollet/keras, 2015.
Junyoung Chung, Caglar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
Joshua C. Denny, Marylyn D. Ritchie, Melissa A. Basford, Jill M. Pulley, Lisa Bastarache, Kristin Brown-Gentry, Deede Wang, Dan R. Masys, Dan M. Roden, and Dana C. Crawford. PheWAS: demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations. Bioinformatics, 26(9):1205-1210, 2010.
Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15:3133-3181, 2014.
Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. arXiv preprint, 1303.5778, 2013.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. Journal of Machine Learning Research, 2015.
Peter J. Kaboli, Brad J. McClimon, Angela B. Hoth, and Mitchell J. Barnett. Assessing the accuracy of computerized medication histories. The American Journal of Managed Care, 10:872-877, Nov 2004.
Caroline Keogh, Allen Kachalia, Karen Fiumara, Dorothy Goulart, Jonathan Coblyn, and Sonali P. Desai. Ambulatory medication reconciliation: Using a collaborative approach to process improvement at an academic medical center. Joint Commission Journal on Quality and Patient Safety, 42:186-194, Apr 2016.
Simon de Lusignan and Chris van Weel. The use of routinely collected computer data for research in primary care: opportunities and challenges. Family Practice, 23:253-263, Apr 2006.
Thomas A. Lasko. Efficient inference of Gaussian process modulated renewal processes with application to medical event data. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), July 2014.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 3111-3119. Curran Associates, Inc., 2013.
Christopher Olah. Understanding LSTM networks. http://colah.github.io/posts/2015-08-Understanding-LSTMs/, 2015.
Kimberly J. O'Malley, Karon F. Cook, Matt D. Price, Kimberly Raiford Wildes, John F. Hurdle, and Carol M. Ashton. Measuring diagnoses: ICD code accuracy. Health Services Research, 40:1620-1639, Oct 2005.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
D. M. Roden, J. M. Pulley, M. A. Basford, G. R. Bernard, E. W. Clayton, J. R. Balser, and D. R. Masys. Development of a large-scale de-identified DNA biobank to enable personalized medicine. Clin Pharmacol Ther, 84(3):362-369, Sep 2008.
D. G. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L.
McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations, pp. 318-362. MIT Press, 1986.
Charles Safran, Meryl Bloomrosen, W. Edward Hammond, Steven Labkoff, Suzanne Markel-Fox, Paul C. Tang, Don E. Detmer, and Expert Panel. Toward a national framework for the secondary use of health data: an American Medical Informatics Association white paper. J Am Med Inform Assoc, 14(1):1-9, 2007.
Konstantinos Sechidis, Grigorios Tsoumakas, and Ioannis Vlahavas. On the stratification of multi-label data. In Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III, ECML PKDD'11, pp. 145-158, Berlin, Heidelberg, 2011.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3104-3112. Curran Associates, Inc., 2014.
Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Mining multi-label data. In Oded Maimon and Lior Rokach (eds.), Data Mining and Knowledge Discovery Handbook, pp. 667-685. Springer US, Boston, MA, 2010.
G. P. Velo and P. Minuz. Medication errors: prescribing faults and prescription errors. Br J Clin Pharmacol, 67(6):624-628, Jun 2009.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014.
W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig. Achieving human parity in conversational speech recognition, 2016.
Hua Xu, Shane P. Stenner, Son Doan, Kevin B. Johnson, Lemuel R. Waitman, and Joshua C. Denny. MedEx: a medication information extraction system for clinical narratives. J Am Med Inform Assoc, 17(1):19-24, 2010.
Matthew D. Zeiler. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701, 2012.
M. L. Zhang and Z. H. Zhou. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8):1819-1837, Aug 2014."}, {"section_index": "11", "section_name": "APPENDIX A.", "section_text": "This appendix lists the optimized parameters for the different models.
Except where noted, parameters were optimized under random search.
Recurrent Neural Network Models (parameters marked with an asterisk were optimized with human-guided search):

Parameter                                          GRU     LSTM
Dropout for input gates                            0.1     0.25
Dropout for recurrent connections                  0.75    0.75
L1 applied to the input weights matrices           0       0
L1 applied to the recurrent weights matrices       0       0
L2 applied to the input weights matrices           0.0001  0.0001
L2 applied to the recurrent weights matrices       0.0001  0.001
L2 applied to the output layer's weights matrices  0.0001  0.001
Dropout before the output layer                    0.5     0.5
*Number of recurrent layers                        3       3
*Number of nodes in recurrent units                400     400

Feed Forward Neural Network Model:
Random Forest Model (binary input):"}, {"section_index": "12", "section_name": "APPENDIX B.", "section_text": "This appendix lists the top-level classes for the International Statistical Classification of Diseases and Related Health Problems, Ninth Revision (ICD-9) and the Anatomical Therapeutic Chemical Classification System (ATC).

001-139   Infectious and parasitic diseases
140-239   Neoplasms
240-279   Endocrine, nutritional and metabolic diseases, and immunity disorders
280-289   Diseases of the blood and blood-forming organs
290-319   Mental disorders
320-359   Diseases of the nervous system
360-389   Diseases of the sense organs
390-459   Diseases of the circulatory system
460-519   Diseases of the respiratory system
520-579   Diseases of the digestive system
580-629   Diseases of the genitourinary system
630-679   Complications of pregnancy, childbirth, and the puerperium
680-709   Diseases of the skin and subcutaneous tissue
710-739   Diseases of the musculoskeletal system and connective tissue
740-759   Congenital anomalies
760-779   Certain conditions originating in the perinatal period
780-799   Symptoms, signs, and ill-defined conditions
800-999   Injury and poisoning
V01-V91   Supplementary - factors influencing health status and contact with health services
E000-E999 Supplementary - external causes of injury and poisoning

Top-level groups of ATC codes and their corresponding colors are used in Figure 4 and Appendix C."}, {"section_index": "13", "section_name": "APPENDIX C.", "section_text": "This appendix presents results from three illustrative cases from the dozen cases randomly selected for individual evaluation.
remission\nLong-term (current) use of other medications\n\nNonspecific abnormal electrocardiogram [ECG] [EKG]\n\nMultiple myeloma, without mention of having achieved remission\nPeripheral stem cells replaced by transplant\n\nMultiple myeloma, in remission\n\nCentral venous catheter placement with guidance\n\nPeripheral stem cells replaced by transplant\n\nFitting and adjustment of vascular catheter\n\nMultiple myeloma, without mention of having achieved remission\nPeripheral stem cells replaced by transplant\n\nMultiple myeloma, in remission\n\nMultiple myeloma, without mention of having achieved remission\nPeripheral stem cells replaced by transplant\n\nMultiple myeloma, without mention of having achieved remission\nPeripheral stem cells replaced by transplant\n\ncosa, I 10a |\n\nROLA\n\n4.8 months ago\n4.8 months ago\n4.8 months ago\n4.8 months ago\n4.8 months ago\n4.8 months ago\n2.9 months ago\n2.9 months ago\n2.9 months ago\n1.6 months ago\n1.6 months ago\n1.6 months ago\n3.7 weeks ago\n3.7 weeks ago\n3.7 weeks ago\n3.7 weeks ago\n3.7 weeks ago\n3.7 weeks ago\n3.7 weeks ago\n3.6 weeks ago\n3.6 weeks ago\n3.4 weeks ago\n4 days ago\n\n4 days ago\n\n3 days ago\n\n3 days ago\n\n3 days ago\n\n3 days ago\n\n3 days ago\n\n2 days ago\n\n2 days ago\n\n1 day ago\n\n1 day ago\n\nnow\n\nnow\n\nso1c sozp 5\u00b038\n\naa\n19)\n\nfora\n\n08\n\n06\n\nol\n\n0.2|\n\ncosa\na0sc |\narc\n[loos\ncosq\n\nco7A\n\nHoZA\n| J05A,\n\njo1D\n\njoim\n\nLoan\n\nNo2A\n\nNoze\n\nsoy\nMedication predictions for a complicated patient. Each vertical bar represents the prediction for\na single medication class, with the height of the bar representing the confidence of the prediction.\nBlack labels above arrows indicate ATC therapeutic classes for medications the patient was actually\ntaking. Colors and letters below the axis indicate high-level therapeutic class groups.\nPredicted vs. actual medication classes for the patient in Case 1. The four-character sequence in\nthe first and fourth columns is the ATC code for the medication therapeutic class, and an asterisk in\nthe first column indicates that the predicted medication is in the actual medication list. Probabilities\nlisted are the model predictions for the listed therapeutic class. In the predicted medications column,\nall predictions with probability at least 0.2 are listed.\noP eee\n\naaa\n\neee\n\nsee\n\nCOSA*\n\nA04A\nROLA*\n\nJOSA*\nAOIA*\nNO2A*\nBOSC*\nA12C*\nBOSX*\nLO4A*\nNOSA\n\nNO2B*\nSOIA*\nLO3A.\n\nA02B\n\nJo1D*\nC03C*\nBO1A\nV03A\nROGA\nA06A\nJoIM*\nNOSB\nDO4A\nCO7A*\nLOIX\nROSC\n\nNO3A*\n\nCorticosteroids\nAntiinflammatory agents and antiinfect\nnation\n\nCorticosteroids\n\nAlkylating agents\n\nCorticosteroids, other combinations\nCorticosteroids for systemic use, plain\nCorticosteroids, plain\nAntiinflammatory agents\n\nAnti-aene preparations for topical use\n\nAgents for treatment of hemorrhoids and anal fissures\nfor topical use\n\nAntiemetics and antinauseants\n\nDecongestants and other nasal preparations for topi-\ncal use\n\nDirect acting antivirals\n\nStomatological preparations\n\nOpioids\n\nInrigating solutions\n\nOther mineral supplements\n\nLy. 
solution additives\n\nImmunosuppressants\n\nAntipsychotics\n\nOther analgesics and antipyretics\n\nAntiinfectives\n\nImmunostimulants\n\nDrugs for peptic ulcer and gastro-oesophageal reflux\ndisease\n\nOther beta-lactam antibacterials\n\nHigh-ceiling diuretics\n\nAntithrombotic agents\n\nAll other therapeutic products\n\nAntihistamines for systemic use\n\nDrugs for constipation\n\nQuinolone antibacterials\n\nAnxiolytics\n\nAntipruritics, incl. antihistamines, anesthetics, etc.\nBeta blocking agents\n\nOther antineoplastic agents\n\nExpectorants, excl. combinations with cough sup-\npressants\n\nAntiepilepties\n\n97.01%\n95.54%\n\n95.54%\n94.00%\n93.37%\n91.06%\n90.83%\n90.79%\n88.56%\n\n88.52%\n\n87.95%\n87.02%\n\n86.8%\n86.11%\n84.86%\n82.56%\n79.50%\nTA 84%\n68.76%\n58.64%\n57.24%\n54.59%\n45.96%\n44.56%\n\n43.40%\n39.88%\n37.80%\n34.18%\n31.78%\n31.57%\n29.78%\n29.42%\n27.62%\n27.08%\n24.72%\n20.43%\n\n20.00%\n\nS03B\nS01Cc\n\n$02B\nDO7X\nHO2A\nDOTA\nSOIB\nDI0A\nCOSA\n\nROIA\n\nJO5A.\nAOIA\n\nNO2A\nBOSC\nA12C\nBOSX\nLO4A,\nNO2B\nSO1A\n\nJoID\n\nC03C\nJOIM\nCOTA\n\nNO3A\nJOIX\nM03B\n\nCorticosteroids\nAntiinflammatory agents and antiinfectives in combi-\nnation\n\nCorticosteroids\n\nCorticosteroids, other combinations\n\nCorticosteroids for systemic use, plain\nCorticosteroids, plain\n\nAntiinflammatory agents\n\nAnti-acne preparations for topical use\n\nAgents for treatment of hemorthoids and anal fissures\nfor topical use\n\nDecongestants and other nasal preparations for topi-\ncal use\n\nDirect acting antivirals\n\nStomatological preparations\n\nOpioids\nInrigating solutions\n\nOther mineral supplements\n\nLV. solution additives\nImmunosuppressants\n\nOther analgesics and antipyretics\nAntiinfectives\n\nOther beta-lactam antibacterials\nHigh-ceiling diuretics\nQuinolone antibacterials\n\nBeta blocking agents\n\nAntiepileptics\nOther antibacterials\nMuscle relaxants, centrally acting agents\n\n97.01%\n95.54%\n\n95.54%\n93.37%\n91.06%\n90.83%\n90.79%\n88.56%\n88.52%\n\n87.02%\n\n86.83%\n86.11%\n\n84.86%\n82.56%\n79.50%\nT4 84%\n68.76%\n57.24%\n54.59%\n43.40%\n39.88%\n18%\n08%\n\nRE\n\n20.00%\n5.88%\n5.09%\nPredicted vs. actual medication classes for Case 2. Table structure as in case 1.\nTop predictions Prob. True labels Prob.\nMOSB Drugs affecting bone structure and mineralization 88.18% ALIC Vitamin a and d, incl. combinations of the two 39.42%\nH03A_ Thyroid preparations 84.82% NOGA Antidepressants 20.88%\nHOSA Parathyroid hormones and analogues 66.33% CI0A Lipid modifying agents, plain 17.05%\nA1IC* Vitamin a and d, incl. combinations of the two 39.42% NO3A _\u2014_Antiepileptics 15.61%\nNO2B Other analgesics and antipyretics 37.58% CO9C Angiotensin ii antagonists, plain 10.38%\nAQIA \u2014_Stomatological preparations 23.05% LO2B Hormone antagonists and related agents 4.22%\nA12A Calcium 21.59%\nNO6A* Antidepressants 20.88%\nCO7A \u2014_Beta blocking agents 20.81%\nMedication predictions for a simpler patient. Note that the high-prediction medications are clinically\nreasonable given the billing codes in the sequence. Figure representation as in case 1.\nPredicted vs. actual medication classes for Case 3. Table structure as in case 1.\nMedication predictions for a patient with only one ICD-9 code, repeated many times over five years\nhe medications listed under true labels are not indicated for paralysis agitans (Parkinson\u2019s disease)\nout the patient was surely taking them for reasons not documented in the ICD-9 sequence. 
Th\u00ab\nmodel predicted mostly reasonable medications for a patient with Parkinson\u2019s disease, especially\nDopaminergic agents, which is the primary treatment for the disease. Figure representation as ir\ncase 1. above.\n\u2018Top predictions Prob. True labels Prob.\nNO4B Dopaminergic agents 97.66% C10A Lipid modifying agents, plain 13.90%\nNO3A \u2014_Antiepileptics 34.01% COVA Ace inhibitors, plain 9.21%\nNO2B Other analgesics and antipyretics 32.81% COLE Other cardiac preparations 5.56%\nNO6A Antidepressants 26.10% CO2C \u2014_Antiadrenergic agents, peripherally acting 0.72%\nNO2A Opioids 20.33% G03B Androgens 0.32%\nA14A Anabolic steroids 0.08%\n19\n0.8\n05\n0."}]
rJ8uNptgl
[{"section_index": "0", "section_name": "TOWARDS THE LIMIT OF NETWORK QUANTIZATION", "section_text": "Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee\nee\n\nfyoojin.c, mostafa. e, jungwon2. lee}@samsung. com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Network quantization is one of network compression techniques to reduce the re\ndundancy of deep neural networks. It reduces the number of distinct network pa\nrameter values by quantization in order to save the storage for them. In this paper\nwe design network quantization schemes that minimize the performance loss duc\nto quantization given a compression ratio constraint. We analyze the quantitativ.\nrelation of quantization errors to the neural network loss function and identify tha\nthe Hessian-weighted distortion measure is locally the right objective function fo\nthe optimization of network quantization. As a result, Hessian-weighted k-mean:\nclustering is proposed for clustering network parameters to quantize. When opti\nmal variable-length binary codes, e.g., Huffman codes, are employed for furthe\ncompression, we derive that the network quantization problem can be related t\nthe entropy-constrained scalar quantization (ECSQ) problem in information the\nory and consequently propose two solutions of ECSQ for network quantization\ni.e., uniform quantization and an iterative solution similar to Lloyd\u2019s algorithm\nFinally, using the simple uniform quantization followed by Huffman coding, we\nshow from our experiments that the compression ratios of 51.25, 22.17 and 40.6:\nare achievable for LeNet, 32-layer ResNet and AlexNet, respectively."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have emerged to be the state-of-the-art in the field of machine learning fo:\nimage classification, object detection, speech recognition, natural language processing, and machine\ntranslation (LeCun et all [2015). The substantial progress of neural networks however comes with\nhigh cost of computations and hardware resources resulting from a large number of parameters. Fo!\nexample, came up with a deep convolutional neural network consisting ot\n61 million parameters and won the ImageNet competition in 2012. It is followed by deeper neura\nnetworks with even larger numbers of parameters, e.g.,[Simonyan & Zisserman| D014).\nThe large sizes of deep neural networks make it difficult to deploy them on resource-limited devices.\ne.g., mobile or portable devices, and network compression is of great interest in recent years to\nreduce computational cost and memory requirements for deep neural networks. Our interest in this\npaper is mainly on curtailing the size of the storage (memory) for network parameters (weights and\nbiases). In particular, we focus on the network size compression by reducing the number of distinct\nnetwork parameters by quantization.\nBesides network quantization, network pruning has been studied for network compression to remov\nredundant parameters permanently from neural networks (Mozer & Smolensky,|1989;{LeCun et al\n1989; [Hassibi & Storkl, {1993} [Han et al.) 
2015b; Lebedev & Lempitsky, 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to find more efficient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Kim et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been examined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks consisting of binary or ternary parameters can be found in Courbariaux et al. (2015); Lin et al. (2015b); Rastegari et al. (2016). We note that these are different types of network compression techniques, which can be employed on top of each other.

The most related work to our investigation in this paper can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for network quantization. This conventional approach however is proposed with little consideration for the impact of quantization errors on the neural network performance loss and no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the suboptimality of this conventional method and newly design quantization schemes for neural networks. In particular, we formulate an optimization problem to minimize the network performance loss due to quantization given a compression ratio constraint and find efficient quantization methods for neural networks.

The main contribution of the paper can be summarized as follows:

- It is derived that the performance loss due to quantization in neural networks can be quantified approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss.
- It is identified that the optimization problem for network quantization provided a compression ratio constraint can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efficient heuristic solutions for ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm.
- As an alternative of Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training. The advantage of using this alternative is that it is computed while training and can be obtained at the end of training at no additional cost.
- It is shown how the proposed network quantization schemes can be applied for quantizing network parameters of all layers together at once, rather than layer-by-layer network quantization in Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, quantizing network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization.

The rest of the paper is organized as follows. In Section 2, we define the network quantization problem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experiment results and conclusion can be found in Section 5 and Section 6, respectively.

We consider a neural network that is already trained, pruned if employed and fine-tuned before quantization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on quantization of unpruned parameters.

The goal of network quantization is to quantize (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode quantized parameters into binary codewords to store instead of actual parameter values. Either fixed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end.

Suppose that we have total N parameters in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let C_i be the set of network parameters in cluster i and let b_i be the number of bits of the codeword assigned to the network parameters in cluster i for 1 \le i \le k. For a lookup table to decode quantized values from their binary encoded codewords, we store k binary codewords (b_i bits for 1 \le i \le k) and corresponding quantized values (b bits for each). The compression ratio is then given by

\text{Compression ratio} = \frac{Nb}{\sum_{i=1}^{k} (|C_i| + 1) b_i + kb} \qquad (1)

Observe in (1) that the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the lengths of the binary codewords assigned to them, in particular, when a variable-length code is used for encoding quantized values. For fixed-length codes, however, all codewords are of the same length, i.e., b_i = \lceil \log_2 k \rceil for all 1 \le i \le k, and thus the compression ratio is reduced to only a function of the number of clusters, i.e., k, assuming that N and b are given.
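As a quick illustration of equation (1), a small helper (our own, not from the paper) that evaluates the compression ratio from cluster sizes and codeword lengths:

def compression_ratio(cluster_sizes, codeword_bits, b=32):
    # cluster_sizes[i] = |C_i|; codeword_bits[i] = b_i; b = bits per original parameter.
    n = sum(cluster_sizes)                       # N: total number of parameters
    k = len(cluster_sizes)
    stored = sum((c + 1) * bi for c, bi in zip(cluster_sizes, codeword_bits)) + k * b
    return n * b / stored

# Example: 1M parameters in 8 equally sized clusters with 3-bit fixed-length codewords;
# the result is close to 32/3, as the lookup-table overhead is negligible for large N.
print(compression_ratio([125_000] * 8, [3] * 8))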
Provided network parameters {w_i}_{i=1}^{N} to quantize, k-means clustering partitions them into k disjoint sets (clusters), denoted by C_1, C_2, ..., C_k, while minimizing the mean square quantization error (MSQE) as follows:

\operatorname*{argmin}_{C_1, C_2, \ldots, C_k} \sum_{i=1}^{k} \sum_{w \in C_i} |w - c_i|^2, \quad \text{where} \quad c_i = \frac{1}{|C_i|} \sum_{w \in C_i} w \qquad (2)

We observe two issues with employing k-means clustering for network quantization.

- First, although k-means clustering minimizes the MSQE, it does not imply that k-means clustering minimizes the performance loss due to quantization as well in neural networks. K-means clustering treats quantization errors from all network parameters with equal importance. However, quantization errors from some network parameters may degrade the performance more significantly than the others. Thus, for minimizing the loss due to quantization in neural networks, one needs to take this dissimilarity into account.
- Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is however suboptimal when variable-length coding follows since the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and assigned codeword lengths to them, which are determined by the binary coding scheme employed after clustering. Therefore, for the optimization of network quantization given a compression ratio constraint, one needs to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the specific binary coding scheme employed after clustering.
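For concreteness, a minimal sketch of this conventional baseline (our own illustration, assuming scikit-learn and treating the flattened weight vector as the input):

import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights, k):
    # 1-D k-means over the parameter values; each weight is replaced by its cluster center.
    w = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(w)
    centers = km.cluster_centers_.ravel()
    return centers[km.labels_], km.labels_       # quantized weights, cluster index per weight

w = np.random.randn(10_000).astype(np.float32)   # stand-in for trained parameters
wq, labels = kmeans_quantize(w, k=8)
print(np.mean((w - wq) ** 2))                    # the MSQE of equation (2)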
approximatelv bv using a stochastic sradient descent (SGD) method with mini-batches:\nW =argmin L(Xrain;w), where L(\u00a5V;w) Ss loss(f (x; w), \u00a5(x)).\nw - xeX\nProvided network parameters {w;}/_, to quantize, k-means clustering partitions them into k dis-\njoint sets (clusters), denoted by C),C2,...,C;, while minimizing the mean square quantization error\n(MSQEB) as follows:\ne First, although k-means clustering minimizes the MSQE, it does not imply that k-means\nclustering minimizes the performance loss due to quantization as well in neural networks.\nK-means clustering treats quantization errors from all network parameters with equal im-\nportance. However, quantization errors from some network parameters may degrade the\nperformance more significantly that the others. Thus, for minimizing the loss due to quan-\ntization in neural networks, one needs to take this dissimilarity into account.\n\ne Second, k-means clustering does not consider any compression ratio constraint. It simply\nminimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is\nhowever suboptimal when variable-length coding follows since the compression ratio de-\npends not only on the number of clusters but also on the sizes of the clusters and assigned\ncodeword lengths to them, which are determined by the binary coding scheme employed af-\nter clustering. Therefore, for the optimization of network quantization given a compression\nratio constraint, one need to take the impact of binary coding into account, i.e., we need to\nsolve the quantization problem under the actual compression ratio constraint imposed by\nthe specific binary coding scheme employed after clustering.\n[n this section, we analyze the impact of quantization errors on the neural network loss function\nand derive that the Hessian-weighted distortion measure is a relevant objective function for network\nquantization in order to minimize the quantization loss locally. Moreover, from this analysis, we pro-\npose Hessian-weighted k-means clustering for network quantization to minimize the performance\nloss due to quantization in neural networks.\nThe average loss function L(Y; w) can be expanded by Taylor series with respect to w as follows\n1\n5L(\u00a5;w) = g(w)? dw + gow! H(w)dw + O(||dw||\u00b0).\nthe square matrix H(w) consisting of second-order partial derivatives is called as Hessian matrix\nor Hessian. Assume that the loss function has reached to one of its local minima, at w = w, after\ntraining. At local minima, gradients are all zero, i.e., we have g(w) = 0, and thus the first term in\nthe right-hand side of G) can be neglected at w = w. The third term in the right-hand side of\nis also ignored under the assumption that the average loss function is approximately quadratic at the\nlocal minimum w = w. Finally, for simplicity, we approximate the Hessian matrix as a diagonal\nmatrix by setting its off-diagonal terms to be zero. Then. it follows from (3) that\nwhere w; is a quantized value of 1;. Finally, combining (4) and (5), we derive that the local impact\nof quantization on the average loss function at w = Ww can be quantified approximately as follows:\nAt a local minimum, the diagonal elements of Hessian, i.e., h;;(w)\u2019s, are all non-negative and thus\nthe summation in (6) is always additive, implying that the average loss function either increases or\nstays the same. 
For notational simplicity, we use w_i = \hat{w}_i and h_{ii} = h_{ii}(\hat{w}) from now on. The optimal clustering that minimizes the Hessian-weighted distortion measure is given by

\operatorname*{argmin}_{C_1, C_2, \ldots, C_k} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \quad \text{where} \quad c_j = \frac{\sum_{w_i \in C_j} h_{ii} w_i}{\sum_{w_i \in C_j} h_{ii}} \qquad (7)

We call this Hessian-weighted k-means clustering. Observe in (7) that we give a larger penalty for a network parameter in defining the distortion measure for clustering when its second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact on the loss function due to quantization is expected to be larger for that parameter.

Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when fixed-length binary coding follows, where the compression ratio solely depends on the number of clusters as shown in Section 2.1. Similar to the conventional k-means clustering, solving this optimization is not easy, but Lloyd's algorithm is still applicable as an efficient heuristic solution for this problem if Hessian-weighted means are used as cluster centers instead of non-weighted regular means.

For obtaining Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of the network parameters, i.e., we need to calculate

h_{ii}(\hat{w}) = \frac{1}{|X|} \left. \frac{\partial^2}{\partial w_i^2} \sum_{x \in X} \text{loss}(f(x; w), \hat{y}(x)) \right|_{w = \hat{w}} \qquad (8)

Recall that we are interested in only the diagonal elements of Hessian. An efficient way of computing the diagonal of Hessian is presented in Le Cun (1987); Becker & Le Cun (1988), and it is based on the back propagation method that is similar to the back propagation algorithm used for computing first-order partial derivatives (gradients). That is, computing the diagonal of Hessian is of the same order of complexity as computing gradients.

Hessian computation and our network quantization are performed after completing network training. For the data set X used to compute Hessian in (8), we can either reuse a training data set or use some other data set, e.g., a validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufficient to yield a good approximation of Hessian for network quantization.
"}, {"section_index": "4", "section_name": "3.5 ALTERNATIVE OF HESSIAN", "section_text": "Although there is an efficient way to obtain the diagonal of Hessian as discussed in the previous subsection, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian.

The Adam algorithm computes adaptive learning rates for individual network parameters from the first and second moment estimates of gradients. We compare the Adam method to Newton's optimization method using Hessian and notice that the second moment estimates of gradients in the Adam method act like the Hessian in Newton's method. This observation leads us to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian. The advantage of using the second moment estimates from the Adam method is that they are computed while training and we can obtain them at the end of training at no additional cost. It makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012)."}, {"section_index": "5", "section_name": "3.6 QUANTIZATION OF ALL LAYERS", "section_text": "We propose quantizing the network parameters of all layers in a neural network together at once by taking Hessian-weight into account. Layer-by-layer quantization was examined in the previous work (Gong et al., 2014; Han et al., 2015a). However, e.g., in Han et al. (2015a), a larger number of bits (a larger number of clusters) are assigned to convolutional layers than fully-connected layers, which implies that they heuristically treat convolutional layers more importantly. This follows from the fact that the impact of quantization errors on the performance varies significantly across layers; some layers, e.g., convolutional layers, may be more important than the others. This concern is exactly what we can address by Hessian-weighting.

Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers. Thus, Hessian-weighting may show more benefit in deeper neural networks. We note that Hessian-weighting can still provide gain even for layer-by-layer quantization since it can address the different impact of the quantization errors of network parameters within each layer as well.

Recent neural networks are getting deeper, e.g., see Szegedy et al. (2015a;b); for such deep neural networks, quantizing network parameters of all layers together is even more advantageous since we can avoid layer-by-layer compression rate optimization. Optimizing compression ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers. This is because the total number of possible combinations of compression ratios for individual layers increases exponentially as the number of layers increases.
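As a sketch of how the Section 3.5 alternative can be read out in practice (assuming PyTorch; the paper itself is framework-agnostic): after training with torch.optim.Adam, the exp_avg_sq buffers hold the second moment estimates, and their square roots serve as the Hessian surrogate:

import torch

def adam_hessian_surrogate(optimizer):
    # Collect sqrt of Adam's second moment estimates (v_t) per parameter tensor,
    # to be used in place of the diagonal Hessian entries h_ii.
    surrogate = {}
    for group in optimizer.param_groups:
        for p in group['params']:
            state = optimizer.state[p]
            if 'exp_avg_sq' in state:
                surrogate[p] = state['exp_avg_sq'].sqrt()
    return surrogate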
In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization."}, {"section_index": "6", "section_name": "4.1 ENTROPY CODING", "section_text": "After quantizing network parameters by clustering, lossless data compression by variable-length binary coding can be followed for compressing quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of the average codeword length per symbol that we can achieve by lossless data compression, proved by Shannon (see, e.g., Cover & Thomas (2012), Section 5.3). It is known that optimal codes achieve this limit with some overhead less than 1 bit when only integer-length codewords are allowed. So optimal coding is also called entropy coding. Huffman coding is one of the entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012), Section 5.6), or can be estimated.

Considering a compression ratio constraint in network quantization, we need to solve the clustering problem in (2) or (7) under the compression ratio constraint given by

\text{Compression ratio} = \frac{b}{\bar{b} + \left( \sum_{i=1}^{k} b_i + kb \right)/N} > C, \quad \text{where} \quad \bar{b} = \frac{1}{N} \sum_{i=1}^{k} |C_i| b_i \qquad (9)

which follows from (1). This optimization problem is too complex to solve for any arbitrary variable-length binary code since the average codeword length \bar{b} can be arbitrary. However, we identify that it can be simplified if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., entropy, and then we approximately have

\bar{b} \approx H = -\sum_{i=1}^{k} p_i \log_2 p_i \qquad (10)

where H is the entropy of the quantized network parameters after clustering (i.e., source), given that p_i = |C_i|/N is the ratio of the number of network parameters in cluster C_i to the number of all network parameters (i.e., source distribution). Moreover, assuming that N >> k, the lookup-table overhead in (9) becomes negligible and the constraint approximately reduces to

H = -\sum_{i=1}^{k} p_i \log_2 p_i \le R, \quad \text{where} \quad R = b/C \qquad (11)

In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint for the clustering output. The network quantization problem is then translated into a quantization problem with an entropy constraint, which is called entropy-constrained scalar quantization (ECSQ) in information theory. Two efficient heuristic solutions for ECSQ are proposed for network quantization in the following subsections, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm for k-means clustering.

It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in minimizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes infinite, i.e., as the number of clusters k goes to infinity. This asymptotic result leads us to come up with a very simple but efficient network quantization scheme as follows:

1. We first set uniformly spaced thresholds and divide network parameters into clusters.
2. After determining clusters, their quantized values (cluster centers) are obtained by taking the mean of network parameters in each cluster.

Note that one can use the Hessian-weighted mean instead of the non-weighted mean in computing cluster centers in the second step above in order to take the benefit of Hessian-weighting.
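A minimal sketch of the two steps above (our own illustration), together with the entropy of the resulting cluster distribution, which by (10) approximates the average codeword length after Huffman coding:

import numpy as np

def uniform_quantize(w, k, h=None):
    edges = np.linspace(w.min(), w.max(), k + 1)       # uniformly spaced thresholds
    labels = np.digitize(w, edges[1:-1])               # cluster index per parameter
    centers = np.zeros(k)
    for j in range(k):
        m = labels == j
        if m.any():
            # non-weighted mean, or optionally the Hessian-weighted mean
            centers[j] = np.average(w[m], weights=h[m]) if h is not None else w[m].mean()
    return centers[labels], labels

w = np.random.randn(100_000)
wq, labels = uniform_quantize(w, k=16)
p = np.bincount(labels, minlength=16) / labels.size
print(-(p[p > 0] * np.log2(p[p > 0])).sum())           # entropy: approx. bits/parameter after Huffman coding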
A perfor-\nmance comparison of uniform quantization with non-weighted mean and uniform quantization with\nHessian-weighted mean can be found in Appendix(\nAlthough uniform quantization is a straightforward method, it has never been shown before in th\nliterature that it is actually one of the most efficient quantization schemes for neural networks whet\noptimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization i\nnot always good; it is inefficient for fixed-length coding, which is also first shown in this paper."}, {"section_index": "7", "section_name": "4.4 ITERATIVE ALGORITHM TO SOLVE ECSQ", "section_text": "Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algo-\nrithm, which is similar to Lloyd\u2019s algorithm for k-means clustering. Although this iterative solution\nis more complicated than the uniform quantization in Section 4.3] it finds a local optimum for a\ngiven discrete source. An iterative algorithm to solve the general ECSQ problem is provided in\n(1989). We derive a similar iterative algorithm to solve the ECSQ problem for network\nquantization. The main difference from the method in{Chou et al] is that we minimize the\nHessian-weighted distortion measure instead of the non-weighted regular distortion measure for op-\ntimal quantization. The detailed algorithm and further discussion can be found in Appendix/A .3]\nThis section presents our experiment results for the proposed network quantization schemes in three\nexemplary convolutional neural networks: (a) LeNet (LeCun et al.,[1998) for the MNIST data set\n(b) ResNet 2015) for the CIFAR-10 data set, and (c) AlexNet (Krizhevsky et al.\n\nfor the ImageNet ILSVRC-2012 data set. Our experiments can be summarized as follows:"}, {"section_index": "8", "section_name": "5.1 EXPERIMENT MODELS", "section_text": "First, we evaluate our network quantization schemes for the MNIST data set with a simplified ver-\nsion of LeNet5 (LeCun et al||1998), consisting of two convolutional layers and two fully-connectec\n1. We first set uniformly spaced thresholds and divide network parameters into clusters.\n\n2. After determining clusters, their quantized values (cluster centers) are obtained by takins\nthe mean of network parameters in each cluster.\nWe employ the proposed network quantization methods to quantize all of network param.\neters in a network together at once, as discussed in Section\n\nWe evaluate the performance of the proposed network quantization methods with and with.\nout network pruning. For a pruned model, we need to store not only the values of unprunec\nparameters but also their respective indexes (locations) in the original model. For the inde\u00bb\ninformation, we compute index differences between unpruned network parameters in the\noriginal model and further compress them by Huffman coding as inHanct all (2015a).\n\nFor Hessian computation, 50,000 samples of the training set are reused. We also evaluate\nthe performance when Hessian is computed with 1,000 samples only.\n\nFinally, we evaluate the performance of our network quantization schemes using Hessiar\nwhen its alternative is used instead, as discussed in Section[3.5] To this end, we retrain the\nconsidered neural networks with the Adam SGD optimizer and obtain the second momen\nestimates of gradients at the end of training. 
This section presents our experiment results for the proposed network quantization schemes in three exemplary convolutional neural networks: (a) LeNet (LeCun et al., 1998) for the MNIST data set, (b) ResNet (He et al., 2015) for the CIFAR-10 data set, and (c) AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set. Our experiments can be summarized as follows:

- We employ the proposed network quantization methods to quantize all of the network parameters in a network together at once, as discussed in Section 3.6.
- We evaluate the performance of the proposed network quantization methods with and without network pruning. For a pruned model, we need to store not only the values of unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute index differences between unpruned network parameters in the original model and further compress them by Huffman coding as in Han et al. (2015a).
- For Hessian computation, 50,000 samples of the training set are reused. We also evaluate the performance when Hessian is computed with 1,000 samples only.
- Finally, we evaluate the performance of our network quantization schemes using Hessian when its alternative is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of gradients at the end of training. Then, we use the square roots of the second moment estimates instead of Hessian and evaluate the performance."}, {"section_index": "8", "section_name": "5.1 EXPERIMENT MODELS", "section_text": "First, we evaluate our network quantization schemes for the MNIST data set with a simplified version of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected layers followed by a soft-max layer. It has total 431,080 parameters and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and fine-tune the rest.

Second, we experiment our network quantization schemes for the CIFAR-10 data set with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and fine-tune the rest.

Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% of parameters and fine-tune the rest. In fine-tuning, the Adam SGD optimizer is used in order to avoid the computation of Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after fine-tuning with the Adam method; the top-1 accuracy recovered after pruning and fine-tuning is 56.00%. We are able to find a better pruned model achieving the original accuracy by pruning and retraining iteratively (Han et al., 2015b), which is however not used here."}, {"section_index": "9", "section_name": "5.2 EXPERIMENT RESULTS", "section_text": "[Figure 1: four panels, (a) fixed-length coding, (b) fixed-length coding + fine-tuning, (c) Huffman coding, (d) Huffman coding + fine-tuning, each plotting accuracy (%) against (average) codeword length per network parameter in bits for k-means, Hessian-weighted k-means, uniform quantization, and iterative ECSQ.]

Figure 1: Accuracy versus average codeword length per network parameter after network quantization for 32-layer ResNet.

We first present the quantization results without pruning for 32-layer ResNet in Figure 1, where the accuracy of 32-layer ResNet is plotted against the average codeword length per network parameter after quantization. When fixed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs the best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than others even after fine-tuning. On the other hand, when Huffman coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering.
However, these two ECSQ solutions underperform Hessian-weighted k-means clustering and even k-means clustering when fixed-length coding is employed since they are optimized for optimal variable-length coding.

[Figure 2: two panels, (a) LeNet and (b) ResNet, plotting accuracy (%) against average codeword length (bits) for k-means, Hessian-weighted k-means (50,000 samples), Hessian-weighted k-means (1,000 samples), and Alt-Hessian-weighted k-means.]

Figure 2: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for LeNet and 32-layer ResNet when Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of gradients are used instead of Hessian as an alternative.

Figure 2 shows the performance of Hessian-weighted k-means clustering when Hessian is computed with a small number of samples (1,000 samples). Observe that even using the Hessian computed with a small number of samples yields almost the same performance. We also show the performance of Hessian-weighted k-means clustering when an alternative of Hessian is used instead of Hessian, as explained in Section 3.5. In particular, the square roots of the second moment estimates of gradients are used instead of Hessian, and using this alternative provides similar performance to using Hessian.

In Table 1, we summarize the compression ratios that we can achieve with different network quantization methods for pruned models. The original network parameters are 32-bit float numbers. Using the simple uniform quantization followed by Huffman coding, we achieve the compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal performance loss. Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in Han et al. (2015a). Note that layer-by-layer quantization with k-means clustering is evaluated in Han et al. (2015a) while our quantization schemes including k-means clustering are employed to quantize network parameters of all layers together at once (see Section 3.6).

Table 1: Summary of network quantization results with Huffman coding for pruned models.

LeNet (pruning + quantization of all layers + Huffman coding)      Accuracy (%)   Compression ratio
  Original model                                                   99.25          -
  Pruned model                                                     99.27          10.13
  k-means                                                          99.27          44.58
  Hessian-weighted k-means                                         99.27          47.16
  Uniform quantization                                             99.28          51.25
  Iterative ECSQ                                                   99.27          49.0
  Deep compression (Han et al., 2015a)                             99.26          39.00

ResNet (pruning + quantization of all layers + Huffman coding)
  Original model                                                   92.58          -
  Pruned model                                                     92.58          4.52
  k-means                                                          92.64          18.25
  Hessian-weighted k-means                                         92.67          20.5
  Uniform quantization                                             92.68          22.17
  Iterative ECSQ                                                   92.73          21.0
  Deep compression (Han et al., 2015a)                             N/A            N/A

AlexNet (pruning + quantization of all layers + Huffman coding)
  Original model                                                   57.16          -
  Pruned model                                                     56.00          7.9
  k-means                                                          56.12          30.53
  Alt-Hessian-weighted k-means                                     56.04          33.7
  Uniform quantization                                             56.20          40.65
  Deep compression (Han et al., 2015a)                             57.22          35.00

This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and newly design network quantization schemes so that they can minimize the performance loss due to quantization given a compression ratio constraint. In particular, we analytically show that Hessian can be used as a measure of the importance of network parameters and propose to minimize Hessian-weighted quantization errors in average for clustering network parameters to quantize. Hessian-weighting is beneficial in quantizing all of the network parameters together at once since it can handle the different impact of quantization errors properly not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy-constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides. Two efficient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experiment results show that the proposed network quantization schemes provide considerable gain over the conventional method using k-means clustering, in particular for large and deep neural networks."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2015.
Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31-42, 1989.

Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Herbert Gish and John Pierce. Asymptotically efficient quantizing. IEEE Transactions on Information Theory, 14(5):676-683, 1968.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Yann Le Cun. Modèles connexionnistes de l'apprentissage. PhD thesis, Paris 6, 1987.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554-2564, 2016.
Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598-605, 1989.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164-171, 1993.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.

Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015b.

Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442-450, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012."}, {"section_index": "11", "section_name": "A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR", "section_text": "The diagonal approximation for Hessian simplifies the optimization problem as well as its solution for network quantization. This simplification comes with some performance loss. We conjecture that the loss due to this approximation is small. The reason is that the contributions from off-diagonal terms are not always additive and their summation may end up with a small value. However, diagonal terms are all non-negative and therefore their contributions are always additive. We do not verify this conjecture in this paper since solving the problem without diagonal approximation is too complex; we would even need to compute the whole Hessian matrix, which is also too costly.

Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model for which the objective function can be approximated as a quadratic function with respect to the parameters to quantize in the model.
Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not specific to neural networks but are generally applicable to quantization of parameters of any model whose objective function is approximately locally quadratic with respect to its parameters.

Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining and focus on finding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further fine-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance."}, {"section_index": "12", "section_name": "A.2 EXPERIMENT RESULTS FOR UNIFORM QUANTIZATION", "section_text": "We compare uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean in Figure 3, which shows that uniform quantization with Hessian-weighted mean slightly outperforms uniform quantization with non-weighted mean.

[Figure 3: two panels, (a) Huffman coding and (b) Huffman coding + fine-tuning, plotting accuracy (%) against average codeword length (bits) for uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean.]

Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet when uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean are used.

In order to solve the ECSQ problem for network quantization, we define a Lagrangian cost function:

J_\lambda(C_1, C_2, \ldots, C_k) = D + \lambda H = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} d_\lambda(i, j), \quad \text{where} \quad d_\lambda(i, j) = h_{ii} |w_i - c_j|^2 - \lambda \log_2 p_j \qquad (12)

and where

D = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \qquad H = -\sum_{j=1}^{k} p_j \log_2 p_j.

The ECSQ problem is then solved by finding the clustering that minimizes this Lagrangian cost:

\operatorname*{argmin}_{C_1, C_2, \ldots, C_k} J_\lambda(C_1, C_2, \ldots, C_k).

A heuristic iterative algorithm to solve this method of Lagrange multipliers for network quantization is presented in Algorithm 1. It is similar to Lloyd's algorithm for k-means clustering. The key difference is how to partition network parameters at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function, i.e., d_\lambda(i, j) in (12), is minimized instead, which includes both quantization error and expected codeword length after entropy coding.

Algorithm 1 Iterative solution for entropy-constrained network quantization

Assignment: C_l^{(n)} \leftarrow C_l^{(n)} \cup \{w_i\} \ \text{for} \ l = \operatorname*{argmin}_j \left\{ h_{ii} |w_i - c_j^{(n)}|^2 - \lambda \log_2 p_j^{(n)} \right\}

Update: c_j^{(n+1)} = \frac{\sum_{w_i \in C_j^{(n)}} h_{ii} w_i}{\sum_{w_i \in C_j^{(n)}} h_{ii}} \quad \text{and} \quad p_j^{(n+1)} = \frac{|C_j^{(n)}|}{N}"}]
BJh6Ztuxl
[{"section_index": "0", "section_name": "FINE-GRAINED ANALYSIS OF SENTENCE\nEMBEDDINGS USING AUXILIARY PREDICTION TASKS", "section_text": "Yossi Adi!:?, Einat Kermany\u201d, Yonatan Belinkov\u00ae, Ofer Lavi, Yoav Goldberg!\nThere is a lot of research interest in encoding variable length sentences into fixed\nlength vectors, in a way that preserves the sentence meanings. Two common\nmethods include representations based on averaging word vectors, and represen-\ntations based on the hidden states of recurrent neural networks such as LSTMs.\nThe sentence vectors are used as features for subsequent machine learning tasks\nor for pre-training in the context of deep learning. However, not much is known\nabout the properties that are encoded in these sentence representations and about\nthe language information they capture.\nWe propose a framework that facilitates better understanding of the encoded rep-\nresentations. We define prediction tasks around isolated aspects of sentence struc-\nture (namely sentence length, word content, and word order), and score repre-\nsentations by the ability to train a classifier to solve each prediction task when\nusing the representation as input. We demonstrate the potential contribution of the\napproach by analyzing different sentence representation mechanisms. The analy-\nsis sheds light on the relative strengths of different sentence embedding methods\nwith respect to these low level prediction tasks, and on the effect of the encoded\nvectar\u2019s dimencionality an the recultino renrecentatione"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "While sentence embeddings or sentence representations play a central role in recent deep learning\napproaches to NLP, little is known about the information that is captured by different sentence em-\nbedding learning mechanisms. We propose a methodology facilitating fine-grained measurement\nof some of the information encoded in sentence embeddings, as well as performing fine-grained\ncomparison of different sentence embedding methods.\nIn sentence embeddings, sentences, which are variable-length sequences of discrete symbols, are\nencoded into fixed length continuous vectors that are then used for further prediction tasks. A\nsimple and common approach is producing word-level vectors using, e.g., word2vec (Mikolov et al.,\n2013a;b), and summing or averaging the vectors of the words participating in the sentence. This\ncontinuous-bag-of-words (CBOW) approach disregards the word order in the sentence.!\nAnother approach is the encoder-decoder architecture, producing models also known as sequence-\nto-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In\nthis architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation\nof the sentence, which is then fed as input into a decoder network that uses it to perform some\nprediction task (e.g. recreate the sentence, or produce a translation of it). 
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "*We use the term CBOW to refer to a sentence representation that is composed of an average of the vectors of the words in the sentence, not to be confused with the training method by the same name which is used in the word2vec algorithm.

Some systems (for example in machine translation) train the system end-to-end, and use the trained system for prediction (Bahdanau et al., 2014). Such systems do not generally care about the encoded vectors, which are used merely as intermediate values. However, another common case is to train an encoder-decoder network and then throw away the decoder and use the trained encoder as a general mechanism for obtaining sentence representations. For example, an encoder-decoder network can be trained as an auto-encoder, where the encoder creates a vector representation, and the decoder attempts to recreate the original sentence (Li et al., 2015). Similarly, Kiros et al. (2015) train a network to encode a sentence such that the decoder can recreate its neighboring sentences in the text. Such networks do not require specially labeled data, and can be trained on large amounts of unannotated text. As the decoder needs information about the sentence in order to perform well, it is clear that the encoded vectors capture a non-trivial amount of information about the sentence, making the encoder appealing to use as a general purpose, stand-alone sentence encoding mechanism. The sentence encodings can then be used as input for other prediction tasks for which less training data is available (Dai & Le, 2015). In this work we focus on these "general purpose" sentence encodings.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Our Contribution  We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the skip-thought embeddings of Kiros et al. (2015).

In this work, we focus on what are arguably the three most basic characteristics of a sequence: its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level properties
Our analysis of these low-level properties leads to interesting, actionable insights, exposing relative strengths and weaknesses of the different representations.

Limitations Focusing on low-level sentence properties also has limitations: the tasks focus on measuring the preservation of surface aspects of the sentence and do not measure syntactic and semantic generalization abilities; and the tasks are not directly related to any specific downstream application (although the properties we test are important factors in many tasks: knowing that a model is good at predicting length and word order is likely advantageous for syntactic parsing, while models that excel at word content are good for text classification tasks). Dealing with these limitations requires a complementary set of auxiliary tasks, which is outside the scope of this study and is left for future work.

The study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.

• Sentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).
• LSTM auto-encoders are very effective at encoding word order and word content.
• Increasing the number of dimensions benefits some tasks more than others.
• Adding more hidden units sometimes degrades the encoders' ability to encode word content. This degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU over the decoder output is sub-optimal for evaluating the encoders' quality.
• LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentences when encoding novel sentences, while the skip-thought encoders do rely on such patterns."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Word-level distributed representations have been analyzed rather extensively, both empirically and theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015). In contrast, the analysis of sentence-level representations has been much more limited. A commonly used approach is to either compare the performance of the sentence embeddings on downstream tasks (Hill et al., 2016), or to analyze models specifically trained for a predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).

While the resulting analysis reveals differences in performance of different models, it does not adequately explain what kind of linguistic properties of the sentence they capture. Other studies analyze the hidden units learned by neural networks when training a sentence representation model (Elman, 1991; Karpathy et al., 2015; Kádár et al., 2016). This approach often associates certain linguistic aspects with certain hidden units. Kádár et al.
(2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not clear what is captured by the final sentence embeddings.

Our work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology we propose is general and can be applied to any sentence representation model."}, {"section_index": "4", "section_name": "3 APPROACH", "section_text": "We aim to inspect and compare encoded sentence vectors in a task-independent manner. The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation.

In each experiment, we formulate a prediction task. Given a sentence representation method, we create training data and train a classifier to predict a specific sentence property (e.g. their length) based on their vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentence based on its vector representation, then this property is not encoded in the representation (or rather, not encoded in a useful way, considering how the representation is likely to be used)."}, {"section_index": "5", "section_name": "3.1 THE PREDICTION TASKS", "section_text": "We now turn to describe the specific prediction tasks. We use lower case italics (s, w) to refer to sentences and words, and boldface to refer to their corresponding vector representations (s, w). When more than one element is considered, they are distinguished by indices (e.g. w_1, w_2).

Our underlying corpus for generating the classification instances consists of 200,000 Wikipedia sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

The experiments in this work focus on low-level properties of sentences: the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

Length Task This task measures to what extent the sentence representation encodes its length. Given a sentence representation s ∈ R^k, the goal of the classifier is to predict the length (number of words) in the original sentence s. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths.² The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70) of 1,084 test instances. Predicting the majority class results in classification accuracy of 20.1%.
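The probing setup just described is easy to reproduce. The following is a minimal sketch of a length probe over pre-computed sentence embeddings; it is not the authors' code, and it uses scikit-learn in place of the paper's tuned feed-forward classifier. The bin edges and the 80/20 split are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def length_bin(n_words, edges=(5, 8, 12, 16, 20, 25, 29, 34, 70)):
    # Map a sentence length to one of eight bins (assumed edges).
    return int(np.searchsorted(edges, n_words, side="right")) - 1

def run_length_probe(X, sentences):
    # X: (num_sentences, k) matrix of sentence embeddings from any encoder;
    # sentences: the corresponding tokenized sentences.
    y = np.array([length_bin(len(s)) for s in sentences])
    n_train = int(0.8 * len(y))
    probe = MLPClassifier(hidden_layer_sizes=(X.shape[1],),
                          activation="relu", max_iter=200)
    probe.fit(X[:n_train], y[:n_train])
    return probe.score(X[n_train:], y[n_train:])  # held-out accuracy

Any encoder can be compared under this scheme simply by swapping in its embedding matrix X.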
Word-content Task This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation s ∈ R^k and a word representation w ∈ R^d, the goal of the classifier is to determine whether w appears in s, with access to neither w nor s. This is formulated as a binary classification task, where the input is the concatenation of s and w.

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

Word-order Task This task measures to what extent the sentence representation encodes word order. Given a sentence representation s ∈ R^k and the representations of two words that appear in the sentence, w_1, w_2 ∈ R^d, the goal of the classifier is to predict whether w_1 appears before or after w_2 in the original sentence s. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors s, w_1 and w_2.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%."}, {"section_index": "6", "section_name": "4 SENTENCE REPRESENTATION MODELS", "section_text": "The encoding process usually assumes a vector representation w_i ∈ R^d for each word in the vocabulary. In general, the word and sentence embedding dimensions, d and k, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Continuous Bag-of-words (CBOW) This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED) The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

DEC : s ∈ R^k ↦ ŝ = {w_1, w_2, ..., w_n}
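Of these two instantiations, CBOW is simple enough to sketch directly before turning to the LSTM-based instantiation. The snippet below is an illustration, not the authors' implementation; word_vecs is assumed to be a dict-like map from token to a d-dimensional numpy vector, e.g. loaded from a trained word2vec model.

import numpy as np

def cbow_encode(tokens, word_vecs, dim):
    # Element-wise average of the vectors of the words in the sentence.
    vecs = [word_vecs[w] for w in tokens if w in word_vecs]
    if not vecs:                  # no in-vocabulary words: fall back to zeros
        return np.zeros(dim)
    return np.mean(vecs, axis=0)  # order-invariant by construction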
Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both encoder and decoder. The LSTM decoder is similar to the LSTM encoder but with different weights.

[Figure 1: three panels, (a) Length test, (b) Content test, (c) Order test, plotting task accuracy against representation dimensions for each model.]

Figure 1: Task accuracy vs. embedding size for different models; ED BLEU scores given for reference."}, {"section_index": "7", "section_name": "5 EXPERIMENTAL SETUP", "section_text": "The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size k and train word and sentence vectors of sizes k ∈ {100, 300, 500, 750, 1000}. More details about the experimental setup are available in the Appendix."}, {"section_index": "8", "section_name": "6.1 LENGTH EXPERIMENTS", "section_text": "In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests, length, content and order, we investigate the performance of different sentence representation models across embedding size.

We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.

6.2 WORD CONTENT EXPERIMENTS

To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation.
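To make the word-content setup concrete, here is a hedged sketch of the example-construction procedure from Section 3.1, which underlies the results just discussed: one positive and one negative example per sentence, with negatives drawn from the pool of other sentences' positive words. encode_sent and word_vecs are assumed helpers (any encoder and any in-vocabulary word-vector table can be plugged in).

import random
import numpy as np

def build_content_examples(sentences, encode_sent, word_vecs):
    positives = [random.choice(s) for s in sentences]
    X, y = [], []
    for sent, pos in zip(sentences, positives):
        s_vec = encode_sent(sent)
        # Positive: a word that really occurs in the sentence.
        X.append(np.concatenate([s_vec, word_vecs[pos]])); y.append(1)
        # Negative: a positive word of some *other* sentence, absent here,
        # so word identity alone cannot solve the task.
        neg = random.choice(positives)
        while neg in sent:
            neg = random.choice(positives)
        X.append(np.concatenate([s_vec, word_vecs[neg]])); y.append(0)
    return np.array(X), np.array(y)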
Interestingly, CBOW scores drop at higher dimensions."}, {"section_index": "9", "section_name": "6.3 WORD ORDER EXPERIMENTS", "section_text": "Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM-encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information. One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others. In the next section we analyze the effect of natural language on the different models.

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences?

To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length? Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

[Figure 2: (a) CBOW length prediction accuracy on natural vs. synthetic sentences, by representation dimension; (b) CBOW embedding norm as a function of sentence length.]

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results.

While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease.
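This argument is easy to verify numerically. The following toy simulation (ours, not from the paper) averages n random 300-dimensional "word" vectors, drawn with zero mean per dimension, and shows the mean norm of the average shrinking roughly like 1/sqrt(n):

import numpy as np

rng = np.random.default_rng(0)
for n in (2, 5, 10, 20, 40):
    # 500 synthetic "sentences", each the average of n random word vectors.
    avgs = rng.standard_normal((500, n, 300)).mean(axis=1)
    print(n, np.linalg.norm(avgs, axis=1).mean())  # falls as n grows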
We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.

How does CBOW encode word order? The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics.

To investigate this, we re-run the word order tests, but this time drop the sentence embedding in training and testing time, learning from the word-pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).

[Figure 3: order prediction accuracy by representation dimension for CBOW and ED, with and without the sentence embedding as input.]

Figure 3: Order accuracy w/ and w/o sentence representation for ED and CBOW models.

The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of ~3% accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher order statistics of correlation between word order patterns and the occurrences of specific words."}, {"section_index": "10", "section_name": "HOW IMPORTANT IS ENGLISH WORD ORDER FOR ENCODING SENTENCES?", "section_text": "To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted.
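Constructing such a control is straightforward; a minimal sketch (an illustration, not the authors' code):

import random

def make_permuted(sentences, seed=0):
    rng = random.Random(seed)
    permuted = []
    for s in sentences:
        p = list(s)
        rng.shuffle(p)       # destroy natural-language word order
        permuted.append(p)
    return permuted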
Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders that are trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.

[Figure 4: three panels, (a) Length test, (b) Content test, (c) Order test, plotting prediction accuracy against representation dimensions on natural and permuted sentences.]

Figure 4: Results for length, content and order tests on natural and permuted sentences.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on word ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics. The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable. When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8.2 BLEU score. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.

In addition to the experiments on CBOW and LSTM-encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences.

Given a sentence s_i, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, s_{i-1} and s_{i+1}. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the authors' provided model³ with the recommended embedding size of 4800. This makes the direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

³https://github.com/ryankiros/skip-thoughts

Table 1 summarizes the performance of the skip-thought embeddings in each of the prediction tasks on both the PERMUTED and original dataset.

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings.

          | Length | Word content | Word order
Original  | 82.1%  | 79.1%        | 81.1%
Permuted  | 68.2%  | 76.4%        | 76.5%

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks.
Its performance is similar to the higher-dimensional encoder-decoder models, except in the order task where it lags somewhat behind. However, we note that the results are not directly comparable as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts."}, {"section_index": "11", "section_name": "9 CONCLUSION", "section_text": "We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods:

• CBOW is surprisingly effective: in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance on higher dimensions.
• With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content information. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500-dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.
• The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences.
• In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus.
• Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 238-247, Baltimore, Maryland, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P14-1023.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3061-3069, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.

Jeffrey L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195-225, 1991.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 315-323, 2011.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP, 2013.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, 2012.

Ákos Kádár, Grzegorz Chrupała, and Afra Alishahi. Representation of linguistic form and function in recurrent neural networks. arXiv preprint arXiv:1602.08952, 2016.

Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In Proc. of CoNLL, pp. 171-180, Baltimore, Maryland, 2014.

Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057, 2015.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013b.

Donald B. Rubin. Matching to remove bias in observational studies. Biometrics, pp. 159-183, 1973.

Allen Schmaltz, Alexander M. Rush, and Stuart M. Shieber. Word ordering without syntax. arXiv preprint arXiv:1604.08633, 2016.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop. COURSERA: Neural Networks for Machine Learning, 2012.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Sentence Encoders The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words.

For the CBOW model, we train Skip-gram word vectors (Mikolov et al., 2013a), with hierarchical softmax and a window size of 5 words, using the Gensim implementation.⁴ We control for the embedding size k and train word vectors of sizes k ∈ {100, 300, 500, 750, 1000}.

⁴https://radimrehurek.com/gensim

For the encoder-decoder models, we use an in-house implementation using the Torch7 toolkit (Collobert et al., 2011). The decoder is trained as a language model, attempting to predict the correct word at each time step using a negative-log-likelihood objective (cross-entropy loss over the softmax layer). We use one layer of LSTM cells for the encoder and decoder using the implementation in Léonard et al. (2015).

We use the same size for word and sentence representations (i.e. d = k), and train models of sizes k ∈ {100, 300, 500, 750, 1000}. We follow previous work on sequence-to-sequence learning (Sutskever et al., 2014; Li et al., 2015) in reversing the input sentences and clipping gradients. Word vectors are initialized to random values.

We evaluate the encoder-decoder models using BLEU scores (Papineni et al., 2002), a popular machine translation evaluation metric that is also used to evaluate auto-encoder models (Li et al., 2015). BLEU score measures how well the original sentence is recreated, and can be thought of as a proxy for the quality of the encoded representation. We compare it with the performance of the models on the three prediction tasks. The results of the higher-dimensional models are comparable to those found in the literature, which serves as a sanity check for the quality of the learned models.

Auxiliary Task Classifier For the auxiliary task predictors, we use multi-layer perceptrons with a single hidden layer and ReLU activation, which were carefully tuned for each of the tasks. We experimented with several network architectures prior to arriving at this configuration.

Further details regarding the training and architectures of both the sentence encoders and auxiliary task classifiers are available below."}, {"section_index": "13", "section_name": "ENCODER DECODER", "section_text": "Parameters of the encoder-decoder were tuned on a dedicated validation set. We experimented with different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.2, 0.3, 0.5) (Hinton et al., 2012) and optimization techniques (AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012)). We also experimented with different batch sizes (8, 16, 32), and found improvement in runtime but no significant improvement in performance.
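The tuning procedure implied here is a plain grid search. A hedged sketch follows, where train_and_eval is a hypothetical helper that trains a model with the given setting and returns its validation score; the option names are illustrative only.

from itertools import product

def grid_search(train_and_eval):
    grid = product([0.1, 0.01, 0.001],               # learning rates
                   [0.1, 0.2, 0.3, 0.5],             # dropout rates
                   ["adagrad", "adadelta", "adam", "rmsprop"])
    best, best_score = None, float("-inf")
    for lr, dropout, opt in grid:
        score = train_and_eval(lr=lr, dropout=dropout, optimizer=opt)
        if score > best_score:
            best, best_score = (lr, dropout, opt), score
    return best, best_score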
Based on the tuned parameters, we trained the encoder-decoder models on a single GPU (NVIDIA Tesla K40), with mini-batches of 32 sentences, learning rate of 0.01, dropout rate of 0.1, and the AdaGrad optimizer; training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set."}, {"section_index": "14", "section_name": "PREDICTION TASKS", "section_text": "Parameters for the prediction tasks as well as the classifier architecture were tuned on a dedicated validation set. We experimented with one-, two- and three-layer feed-forward networks using ReLU (Nair & Hinton, 2010; Glorot et al., 2011), tanh and sigmoid activation functions. We tried different hidden layer sizes: the same as the input size, twice the input size, and one and a half times the input size. We tried different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.3, 0.5, 0.8) and different optimization techniques (AdaGrad, AdaDelta and Adam).

Our best tuned classifier, which we use for all experiments, is a feed-forward network with one hidden layer and a ReLU activation function. We set the size of the hidden layer to be the same size as the input vector. We place a softmax layer on top whose size varies according to the specific task, and apply dropout before the softmax layer. We optimize the log-likelihood using AdaGrad. We use a dropout rate of 0.8 and a learning rate of 0.01. Training is stopped after 5 epochs with no loss improvement on the development set. Training was done on a single GPU (NVIDIA Tesla K40)."}, {"section_index": "15", "section_name": "10 ADDITIONAL EXPERIMENTS - CONTENT TASK", "section_text": "How well do the models preserve content when we increase the sentence length? In Fig. 5 we plot content prediction accuracy vs. sentence length for different models.

[Figure 5: content prediction accuracy as a function of sentence length for CBOW 100/300 and ED 500/750/1000.]

Figure 5: Content prediction accuracy vs. sentence length for selected models.

As expected, all models suffer a drop in content accuracy on longer sentences. The degradation is roughly linear in the sentence length. For the encoder-decoder, models with fewer dimensions seem to degrade slower."}, {"section_index": "16", "section_name": "APPENDIX III: SIGNIFICANCE TESTS", "section_text": "In this section we report the significance tests we conducted in order to evaluate our findings. To do so, we use the paired t-test (Rubin, 1973).

All the results reported in the summary of findings are highly significant (p-value < 0.0001). The ones we found to be not significant (p-value > 0.03) are those whose accuracies differ little, e.g. ED with size 500 and ED with size 750 tested on the word order task (p-value = 0.11), or CBOW with dimensions 750 and 1000 (p-value = 0.3).

Table 2: P-values for ED vs. CBOW over the different dimensions and tasks. For example, in the row where
For example, in the row where\ndim equals 100, we compute the p-value of ED compared to CBOW with embed size of 100 on all three tasks.\nTable 3: P-values for ED models over the different dimensions and tas\ne eee:\n5 @ & \u00a9\na 8 & 8\n\nContent prediction accuracy\n\n\u00a9\n3\n\n0.65\n\nexer\n\nCBOW 300\nBOW 100\nED 750\nED 500\nED 1000\n\n5 10 15 20 25\nSentence length\nAs expected, all models suffer a drop in content accuracy on longer sentences. The degradation is\nroughly linear in the sentence length. For the encoder-decoder, models with fewer dimensions seem\nto degrade slower.\nDim. Length Word content | Word order\n100 vs. 300 0.0 0.0 1.5e-33\n300 vs. 500 | 1.47e-215 0.0 3.06\u00a2e-64\n500 vs. 750 0.68 0.032 0.05\n\n750 vs. 1000 | 4.44e-32 0.3 0.08\nTable 4: P-values for CBOW models over the different dimensions and tasks"}]
HyFkG45gl
[{"section_index": "0", "section_name": "MACHINE SOLVER FOR PHYSICS WORD PROBLEMS", "section_text": "Megan Leszczynski & Jos\u00e9 Moreira\nIBM T.J. Watson Research Center\nYorktown Heights. NY 10598 USA.\nWe build a machine solver for word problems on the physics of a free falling object\nunder constant acceleration of gravity. Each problem consists of a formulation\npart, describing the setting, and a question part asking for the value of an unknown.\nOur solver consists of two long short-term memory recurrent neural networks and\na numerical integrator. The first neural network (the labeler) labels each word\nof the problem, identifying the physical parameters and the question part of the\nproblem. The second neural network (the classifier) identifies what is being asked\nin the question. Using the information extracted by both networks, the numerical\nintegrator computes the solution. We observe that the classifier is resilient to errors\nmade by the labeler, which does a better job of identifying the physics parameters\nthan the question. Training, validation and test sets of problems are generated\nfrom a grammar, with validation and test problems structurally different from the\ntraining problems. The overall accuracy of the solver on the test cases is 99.8%."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We present a complete system architecture for a machine solver that automatically solves a clas:\nof physics word problems, namely classical mechanics of a point particle in free fall. This domair\nallows us to formulate one dynamical system to which all the physics problems in this domain car\nbe mapped. The dynamical system describes how the state of the particle, defined by its locatior\nand velocity, changes over time. Correspondingly, the initial conditions for the dynamical systerr\ninclude the location and velocity of the particle at the time origin.\nGiven the word problem as input, the solver must first learn to extract the parameters needed t\nproduce the dynamical system and also learn to identify the type of question. Two independentl:\ntrained recurrent neural networks are used to complete these tasks. The first neural network, referre:\nto as the labeler, learns to find the dynamical system parameters and locate the question within th\nproblem statement. The second neural network, referred to as the classifier, identifies the type o\nquestion. Finally, the solver uses a numerical integrator to solve the dynamical system and produc\nthe solution. We use a problem generator in order to produce disjoint datasets as input to the sys\ntem for training and testing. The generator produces short-answer high school-level physics wor\nproblems with mixed units.\nAutomatically solving word problems has been a research interest of the natural language process-\ning community for some time, particularly with math word problems. The main challenge is to\ndevelop a semantic representation of the word problem. learned to represent\nmathematical word problem with a system of equations, by aligning words in the word problem\nto templates. While their technique learns to induce multiple templates and assumes knowledge of\nnumbers and nouns, we assume no knowledge of the words in the text but only map to one template."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "After a brief related work section, we provide a more detailed description of the class of physics\nproblems we address. 
We proceed to describe how the machine solver works and present experimental results. We conclude with a summary of our work and proposals for future work. The appendices contain additional details that did not fit in the body of the paper.

Another study to solve math word problems was done by Hosseini et al. (2014). This study also assumes the ability to identify numbers and nouns in the text, and uses a dependency parser to determine relationships between words in the text. Like the other study, this approach generalizes to math word problems that require different equations. Shi et al. (2015) similarly used a parser to solve math word problems. However, their parser maps the word problems to a carefully defined language they created called DOL, from which equations can be derived. Rather than use a parser to break down the word problems, we use neural networks to learn to identify key pieces of information. Our study is the first of our knowledge to apply recurrent neural networks to the task of solving word problems.

We chose to use recurrent neural networks (RNNs) for the labeler and the classifier, as both of their inputs consist of sequences of words. Recurrent neural networks are commonly used to process sequences, and as a result have found application in natural language processing tasks such as machine translation (Cho et al., 2014b) and speech recognition (Graves et al., 2013). After experimenting with different models, we obtained the most success with Long Short-Term Memory (LSTM) variants of RNNs. For additional discussion on RNNs in general, and LSTMs in particular, we refer the reader to Appendix A.

We consider the following class of physical systems (see Figure 1(a)): In a two-dimensional space with gravity producing a downward constant acceleration g, there is one particle in free fall. That is, no forces other than gravity are acting on the particle. Movement of the particle starts at time t = 0 with an initial position defined by displacements d_1 and d_2 and initial velocity with components v_1 and v_2.

The time behavior of the particle can be represented by the dynamical system shown in Figure 1(b). The state vector x(t) = [x_1(t), x_2(t), ẋ_1(t), ẋ_2(t)]^T consists of two positions and two velocities, and its derivative depends linearly on itself and the acceleration of gravity, as shown in the figure. Combined with the initial condition x(0) = [d_1, d_2, v_1, v_2]^T, the differential equation produces a unique solution.

[Figure 1: (a) diagram of the two-dimensional physics domain; (b) the dynamical system ẋ(t) = Ax(t) + Bu defining the particle's time behavior.]

Figure 1: Physics domain (a): We consider a two-dimensional space with a free falling particle. Displacements d_1 and d_2 define the initial position of the particle, while v_1 and v_2 define its initial velocity. Gravity produces a constant acceleration g pointing straight down. The behavior of the particle is defined by the dynamical system shown in (b).

Our machine solver computes answers to word problems in the domain just described. The word problem must specify, sometimes indirectly, the five parameters of the dynamical system (d_1, d_2, v_1, v_2, and g). It must also include a question that can be answered by computing the time behavior of the system. We discuss how our machine solver works in the next section.
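Since this system is later solved with SciPy's ordinary differential equation integrator (Section 4.2), the core computation can be sketched as follows. This is a minimal illustration that omits the question-driven refinement loop; the time grid and horizon are assumptions made for the example.

import numpy as np
from scipy.integrate import odeint

def free_fall(d1, d2, v1, v2, g, t_end=10.0):
    # State x = [x1, x2, x1_dot, x2_dot]; x' = Ax + Bu with input u = g.
    def deriv(x, t):
        return [x[2], x[3], 0.0, -g]
    t = np.linspace(0.0, t_end, 10001)
    traj = odeint(deriv, [d1, d2, v1, v2], t)  # table of states over time
    return t, traj

# Example: object launched horizontally at 5 m/s from a 20 m height.
t, traj = free_fall(d1=0.0, d2=20.0, v1=5.0, v2=0.0, g=9.8)
hit = np.argmax(traj[:, 1] <= 0.0)  # first sample at or below the ground
print("lands after ~%.3f s" % t[hit])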
We discuss how our machine solver works in the next section.\nTRANSLATE SOLVE\n\nDynamical\nSystem Solution\nx = Ax+Bu\n\nWord\nproblem\nFigure 2: The first step from word problem to dynamical system is accomplished via neural net-\nworks. The second step from dynamical system to solution is achieved with a numerical integrator.\nThe data flow through the labeler and classifier neural networks is shown in Figure |3| We usec\nTensorFlow\u2122||to develop the neural network models for both labeler and the classifier. TensorFlow\nis an open source library from Google that allowed us to easily explore different models and training\n\nsettings with already implemented RNN cells and optimizers (Abadi et al.||2015). We quickly\n\nexperiment with the provided optimizers to find the optimal optimizer for each network.\nq Question\nWord Word Dynamical\nproblem Label System\n\nParameters of\nDynamical System\nLet the acceleration of gravity be 32 ft/s? .. How far. ?\n\nproblem\n\nlabel\nOo 8600 oO oO ie) ie) G A_UNIT QUEST QUEST QUEST\nLet the acceleration of gravity be 32 ft/s? .. How far. ?\n\nproblem\n\nlabel\nOo 8600 oO oO ie) ie) G A_UNIT QUEST QUEST QUEST\nWord Word\nproblem Label\nFigure 4: Example of input to labeler with expected output. A label is associated with each word,\nwhere O indicates other, or a word not needed for the dynamical system translation. Input text is\nshortened for the example.\nThe chosen RNN model is one that produces an output at each time step and has recurrent connection\nbetween hidden units, as described by|Goodfellow et al.|(2016) in Chapter 10, Figure 10.3. At each\nstep of the input sequence, the RNN receives a word embedding and outputs a label for the word.\nThe label that is outputted at each time step can fall into one of the ten categories shown in Table]\nIn addition to tagging words for their relevancy to the dynamical system formulation, we tag the\nquestion part of the word problem to pass to the classifier.\nWe use three measures to assess the performance of the labeler: label accuracy, question accuracy\nand overall accuracy. Label accuracy is measured as having matching labels in the predicted anc\nexpected (generated) labels, not including the question part of the word problem. Question accuracy\nis measured as having both the first word of the question and the last word of the question labelec\ncorrectly, as label-based post processing to extract the question relies only on these indices. Overall\naccuracy is measured as meeting both of the label and question accuracy criteria.\n'TensorFlow is a trademark of Google Inc\nq Question\nWord Word Dynamical\nproblem Label System\n\n{Parameters of\nDynamical System\nFigure 3: The word problem passes through two RNNs to be transformed into the dynamical system\nform.\nThe labeler is an LSTM network with one hidden layer of ten units. Figure |4|shows an example of\nthe data flow through the labeler. The input to the labeler is the full problem statement and the output\nis a label for each word. The words are input into the labeler via an embedding that is randomly\ninitialized and trained simultaneously with the weights and biases. The weights are also randomly\ninitialized and the biases are initialized to zero. 
To limit the exploration of the parameter space, we\nset the dimension of the embedding to equal the number of hidden units.\nTable 1: Possible output word labels and corresponding dynamical system parameters.\nLABEL\n\nDESCRIPTION\n\nQUEST\nG\nA_UNIT\nD_UNIT\nHEIGHT\nV_UNIT\nVv\nTHETA\nSTORY\nfe)\n\nQuestion\n\nValue for gravity\n\nUnit for acceleration (gravity)\n\nUnit for initial height\n\nInitial height value or height of each story\nUnit for velocity\n\nInitial velocity magnitude\n\nAngle of initial movement\n\nValue for number of stories (if applicable)\nOther\n\ng\n\ng\n\ndy\n\ndz\nU1, V2\nU1, V2\nU1, V2\ndz\nWe train the labeler with TensorFlow\u2019s Adam Optimizer, an initial learning rate of 0.1, and a mini-\nbatch size of 100 word problems. The Adam Optimizer uses adaptive learning rates and is par-\nticularly effective with sparse gradients 2014). We use early stopping based on a\nvalidation accuracy or when the training accuracy stops improving. We chose the network architec-\nture and training settings after performing a limited grid search across the number of layers, number\nof units per a layer, and learning rate. (See Appendix|B})\nAfter the labeler assigns a label to each word, a post processing step maps the labels to the dynamical\nsystem parameters, converting the initial conditions and value of gravity to SI units if necessary.\nThe classifier is an LSTM network with one hidden layer of 1,000 units. An example of the data\nflow through the classifier is shown in Figure [5] For the problems in our dataset, the formulation\npart of the word problem does not provide information necessary to classify the type of question.\nMoreover, as sequences become longer, the performance of RNNs tend to decrease\n\n. Armed with these two observations, we chose to only have the question part of the wor\nproblem as the sequence to input into the classifier.\nHow far has the (x1 + x2 = 0)\nrock traveled when it\nstrikes the ground?\nAs with the labeler, we encode the words of the sequence into word embeddings, matching the\ndimension of the word embedding to the number of hidden units, and training them with the weights\nand biases. In this case, a sequence would be one question. Unlike the labeler, there is only one\noutput for each sequence, occurring on the last step of the sequence. For more information see\n\nChapter 10, figure 10.5 of |Goodfellow et al.| (2016) for an illustration. The singular output is the\n\ntype of question, which can fall into one of the nine types shown in Table]"}, {"section_index": "4", "section_name": "4.2 NUMERICAL INTEGRATOR", "section_text": "How far has the (x1 + x2 = 0)\nrock traveled when it\ntrikes the around?\nFigure 5: Example of input to classifier with expected output. Symbol x, refers to horizontal dis-\nplacement and symbol x2 refers to vertical displacement.\nThe classifier is trained with TensorFlow\u2019s Gradient Descent Optimizer, an initial learning rate of\n0.5, and a mini-batch size of 100 questions. As with the labeler, we performed a grid search to\nchoose these hyperparameters. (See Appendix{B])\nThe numerical integrator computes the evolution over time of the dynamical system shown in Fig-\nure[i{b). As input it receives the initial conditions, the value of g, and the type of question extracted\nfrom the labeler and the classifier. Using SciPy\u2019s ordinary differential equation integrator, a table\nof values representing the system\u2019s state to the point that the object hits the ground is iteratively\nconstructed. 
The numerical solution is refined to a precision of 0.001 (one part in a thousand), based on the type of the question. For example, if the question is about the maximum height, we produce a first instance of the table, find the maximum height in that table, and then search for the maximum around that value with increased precision, repeating until we reach the desired precision. Finally, the question type is used to determine which value from the table to output from the solver. This data flow is shown in Figure 6.

Table 2: Possible Output Question Types

Figure 6: Outputs from the labeler and the classifier feed into the numerical integrator, where the labeler outputs form the dynamical system to integrate and the classifier outputs control the focus and output of the integrator."}, {"section_index": "5", "section_name": "4.3 TRAINING, VALIDATION, AND TEST SETS", "section_text": "We define the word problems with a grammar that is provided in the Appendix. The word problems in the training, validation, and test sets are exclusively made up of problems that follow the specifications laid out by the grammar. The grammar allows for mixed units, meaning that within the same problem, the height may have a metric unit, while the velocity may have a U.S. customary unit. The grammar also permits the initial conditions to be exposed in multiple ways. For instance, a theta value and speed will be provided in some problems, from which the solver would need to calculate the initial vertical velocity using the theta, whereas in other problems no theta value may be provided. Using mixed units and varying numbers of values to provide information about each initial condition allows us to increase the complexity of the problems within the scope of the dynamical system.

The grammar also ensures that the training set is disjoint from the validation and test sets, particularly in structure. Examples of generated problems are shown below in Figure 7. This is vital in assessing the ability of the trained networks to generalize.

We implement the grammar in Python. When a new problem is instantiated, the grammar rules are descended to build up the problem, making random choices when choices are available (a toy sketch of this process appears below). Labels for each problem are also automatically generated. The complete generative model is shown in Figure 8. By using a problem generator to build our datasets, we are also free to choose the size of the dataset. Our problem generator is capable of generating ~26,000 different training problems and ~22,000 different test and validation problems.

Assume the acceleration due to gravity is 85 ft/s². A ping pong ball is dropped from the top of a story building, where each story is 89 m. What is the maximum speed the ping pong ball obtains?

A chair is launched at a speed of 51 mph and an angle from the horizontal of 28 degrees. Let the acceleration due to gravity on Planet Watson be 98 m/s². How much time has passed when it reaches its maximum height?

Figure 7: Examples of generated problems that adhere to the grammar.

[Figure 8: the generative model: the grammar produces INPUT (problem text) and OUTPUT (labels and question type) pairs.]

Figure 8: The generative model allows us to generate the input and output for the neural networks without requiring any manual annotation.
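A toy sketch of this generation process follows; it is far simpler than the actual grammar in the Appendix, and every rule, value range, and template below is illustrative only.

import random

def generate_problem(rng=random):
    g_val, g_unit = rng.choice([(9.8, "m/s^2"), (32, "ft/s^2")])
    h_val, h_unit = rng.randint(5, 100), rng.choice(["m", "ft"])
    obj = rng.choice(["rock", "ping pong ball", "chair"])
    question = rng.choice(
        ["How far has the {o} traveled when it strikes the ground?",
         "What is the maximum speed the {o} obtains?"]).format(o=obj)
    text = (f"Let the acceleration due to gravity be {g_val} {g_unit}. "
            f"A {obj} is dropped from a height of {h_val} {h_unit}. {question}")
    # Labels/parameters emitted alongside the text, with no manual annotation.
    params = {"g": g_val, "g_unit": g_unit, "d2": h_val, "d_unit": h_unit}
    return text, params, question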
The datasets consisted of 7,000 word problems for training, 2,000 word problems for validation, and 1,000 word problems for test. The progress of training over time is shown in Figure 9. As can be seen in the left graph, the labeler learns to identify the beginning and end of the question faster than it learns to correctly predict the labels. The overall accuracy of the labeler is both limited by and equivalent to that of the label accuracy. With this particular model of the labeler, there is no problem for which the labeler correctly predicts the non-question labels, but incorrectly locates the question.

[Figure 9: two panels plotting training accuracy against training epoch: labeler accuracy (overall, label, and question) and classifier accuracy.]

Figure 9: Training accuracy of labeler (left) and classifier (right).

The training accuracy for the label, question, and overall reach 100% by the end of the first epoch. The classifier also reaches 100% accuracy on the training set by the end of the first epoch. The epoch is broken down into fractions as the training accuracy is evaluated every seven mini-batches of 100 problems.

The accuracy on the test set after the labeler and classifier have been independently trained is shown in Table 3. The accuracy of the combined RNN system amounts to an overall accuracy of 99.8%.

The labeler achieves 100% accuracy on predicting the non-question labels and incurs a small error on predicting the beginning and end of the question. As a result, the question that is extracted based on the labeler's predictions does not always match the true question. However, based on the classifier's accuracy of 99.8%, the classifier is often resilient to the errors that the labeler makes in extracting the question. While the labeler incorrectly extracts ninety-one questions, the classifier only incorrectly classifies two questions from a test set of 1,000 word problems. Figure 12 in Appendix C shows examples of the labeler's errors and how the classifier handles them.

We note that for the two wrongly classified cases, both shown in Figure 12, the classification error is the same. That is, a question that should be about the speed of the object when it hits the ground is classified as a question about the maximum speed the object reaches. The numerical answer to the problem is the same for both classes of question. Therefore, even in the case of wrongly classified questions, the system produces the right answer.

The high accuracy of the labeler and classifier is not a total surprise. LSTMs have been shown to be very effective in learning context-free and even context-sensitive languages (Gers & Schmidhuber, 2001; Cleeremans et al., 1989; Rodriguez, 2001), including the ability to generalize and recognize structures not seen before. Our training, validation and test sets are from a regular language, as described in Appendix E, so an LSTM should do well in learning them. In fact, we have seen situations (with the training, validation and test sets all with distinct structures) where the labeler and classifier both achieve perfect accuracy on all test problems. We decided to include the data on the "not so perfect" case because it illustrates some important points (Figure 12).

Table 3: Accuracies shown are on the test set of word problems for the system. The classifier is fed the extracted questions as identified by the labeler.
The combined RNN system accuracy is based on the final output of the system having the same dynamical system parameters and question type as the generated output for a word problem.

The trained variables for both models consist of word embeddings for input to the RNN, and weights and biases within the RNN and from the RNN to the final output. We focus our evaluation on the RNN weights, as we believe these are more specific to our physics problem solver. For an evaluation of the word embeddings, please see Appendix D.

The distributions of weights for the labeler and classifier are shown in Figure 10. As the labeler was an LSTM network, there are weights from the input and the previous hidden values to the input, forget, and output gates, as well as to the memory cells. While there appears to be a high concentration of negative weights to the output gate and positive weights to the input gate, this is likely a result of random initialization of the weights, as this pattern was not consistently found with other random initializations. The output weights, which go from the output of the LSTM cell's hidden units to the target labels, have a slightly wider range. The small number of zero weights indicates that the majority of outputs from the hidden units of the LSTM cell contribute to making the final prediction of the label.
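Plots like those in Figure 10 can be produced directly from the trained weight matrices once they are exported as plain arrays; a hedged sketch with matplotlib, where weights_by_gate is an assumed dict from gate name to weight matrix (the export step depends on the framework and is omitted):

import matplotlib.pyplot as plt

def plot_weight_histograms(weights_by_gate):
    fig, axes = plt.subplots(1, len(weights_by_gate), figsize=(12, 3))
    for ax, (name, w) in zip(axes, sorted(weights_by_gate.items())):
        ax.hist(w.ravel(), bins=50)   # flatten the matrix into one histogram
        ax.set_title(name)
        ax.set_xlabel("weight value")
        ax.set_ylabel("frequency")
    fig.tight_layout()
    return fig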
"}, {"section_index": "6", "section_name": "6 CONCLUSIONS", "section_text": "We have developed a machine solver for word problems on the physics of a free-falling object in two-dimensional space with constant acceleration of gravity. The solver has three main components. The labeler labels each word of the problem to identify the parameters of a canonical dynamical system that describes the time evolution of the object, and the part of the problem that corresponds to the question being asked. The classifier classifies the question part. Finally, an integrator is used to solve the dynamical system, producing a numerical answer to the problem.

A grammar-based generator is used to produce the training, validation and test sets of problems for the neural networks. The grammar is specified so that the validation and test problems are structurally different from the training problems. We use a total of 10,000 generated problems, partitioned into 7,000 for training, 2,000 for validation and 1,000 for testing.

When measured against the test set of 1,000 problems, the dynamical system parameters are correctly identified in all of them. The question part is precisely identified in 909 cases, but because the classifier can work with partial questions, in the end all but 2 questions are classified correctly. Therefore, the combined accuracy of the two neural networks, for the purpose of solving the physics problems, is 99.8%.

There are several opportunities for future work. First, we would like to investigate more deeply how our neural networks work: in particular, what features of the word problem they are identifying and how specific units are responsible for that identification. Second, we could extend our solver by considering more complex physical situations, including additional forces, three-dimensional motion, multiple objects, and so on. We would have to extend our canonical dynamical system to represent those situations and/or use a collection of dynamical systems. We expect that the complexity of the neural networks and the training/validation/test sets will grow accordingly. Finally, the more ambitious goal would be to remove the canonical dynamical system(s) and train the networks to build their own. We believe this would be closer to the way humans solve these physics problems."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems, 2015. Software available from http://tensorflow.org.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645-6649. IEEE, 2013.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667.

Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In EMNLP, pp. 523-533, 2014.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.

Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve algebra word problems. 2014.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), 2014b.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press. Book available from http://www.deeplearningbook.org, 2016.

Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. Automatically solving number word problems by semantic parsing and reasoning. In EMNLP, 2015.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. The Journal of Machine Learning Research, 9(2579-2605):85, 2008."}, {"section_index": "8", "section_name": "A RECURRENT NEURAL NETWORKS", "section_text": "The labeler and classifier are both recurrent neural networks (RNNs). We provide background information on RNNs in this section, followed by an overview of Long Short-Term Memory (LSTM) networks, which are an advanced type of RNN and were used to build our networks. A recurrent neural network receives the previous values of the hidden layer as input in addition to the current input values into the network. Thus each hidden unit retains information about the history of the sequence. As explained in Goodfellow et al. (2016), the fundamental behavior of recurrent neural networks can be captured in the following equation:

$$h^{(t)} = f(h^{(t-1)}, x^{(t)}; \theta),$$

where $h^{(t)}$ represents the state of the RNN unit at time $t$, $x^{(t)}$ represents the current input, and $\theta$ represents the weights and biases. The function $f$ is usually the hyperbolic tangent. It is important to note that the weights and biases are reused across time. Thus, while an RNN with one hidden layer can be unfolded in time into many layers, the weights and biases between each of the unfolded layers are shared.

A limitation of the basic recurrent neural network described above is that it cannot retain information over long sequences. If a key piece of information for predicting an output at the end of a long sequence occurs at the very beginning of the sequence, the basic recurrent neural network will likely fail as a result of training difficulties. A popular solution for this limitation is the Long Short-Term Memory (LSTM) - essentially a highly capable, more complex type of recurrent neural network (Hochreiter & Schmidhuber, 1997). An LSTM is composed of a memory cell, and input, output, and forget gates that determine how to modify and reveal the contents of the memory cell. Each of these gates has its own set of weights and biases that are connected to the inputs. Therefore the number of weights within a layer of an LSTM is quadrupled from that of a basic recurrent neural network to 2n x 4n, where n is the number of hidden units in the layer, assuming each layer has the same number of units. The 2n is from the input being a concatenation of the output from the previous hidden layer (in time) with the current input, as occurs for all RNNs, and the 4n is for the connections to each of the three gates as well as to the memory cell input. More specifically, the equations for the LSTM are as follows (Graves, 2013; Zaremba et al., 2014):

$$\begin{pmatrix} i \\ f \\ o \\ g \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} T_{2n,4n} \begin{pmatrix} h_t^{l-1} \\ h_{t-1}^{l} \end{pmatrix}$$

$$c_t^l = f \odot c_{t-1}^l + i \odot g$$

$$h_t^l = o \odot \tanh(c_t^l)$$

As both of our neural network models have only one hidden layer, $h_t^{l-1}$ merely refers to the current input. $T_{2n,4n}$ refers to the weight and bias transformation $Wx + b$ applied to the concatenated hidden-layer inputs. The hyperbolic tangent and sigmoid functions are applied element-wise. The variables $i$, $f$, $o$, and $g$ refer to the input gate, forget gate, output gate, and cell input, respectively.
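As a concrete illustration of these equations, here is a minimal NumPy sketch of a single LSTM step. This is a didactic re-implementation, not the TensorFlow cell the paper actually uses, and the gate ordering inside the combined weight matrix is an arbitrary choice of this sketch.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step following the equations above.

    x: current input, shape (n,); h_prev, c_prev: previous hidden and
    cell states, shape (n,); W: shape (2n, 4n); b: shape (4n,).
    """
    n = h_prev.shape[0]
    z = np.concatenate([x, h_prev]) @ W + b    # the T_{2n,4n} transformation
    i = sigm(z[0 * n:1 * n])                   # input gate
    f = sigm(z[1 * n:2 * n])                   # forget gate
    o = sigm(z[2 * n:3 * n])                   # output gate
    g = np.tanh(z[3 * n:4 * n])                # cell input
    c = f * c_prev + i * g                     # c_t = f (.) c_{t-1} + i (.) g
    h = o * np.tanh(c)                         # h_t = o (.) tanh(c_t)
    return h, c

# Example: one step with n = 10 hidden units, as in the labeler.
n = 10
rng = np.random.default_rng(0)
h, c = lstm_step(rng.normal(size=n), np.zeros(n), np.zeros(n),
                 rng.normal(size=(2 * n, 4 * n)) * 0.1, np.zeros(4 * n))
```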
Another potential solution to the inability of the basic recurrent neural network to capture long-term dependencies is the Gated Recurrent Unit (GRU) (Cho et al., 2014a); however, we had the most success with the LSTM for our specific labeler and classifier tasks."}, {"section_index": "9", "section_name": "B CHOOSING THE RIGHT RNN CONFIGURATION", "section_text": "We selected the models for our RNNs by performing a grid search over the learning rate, the number of units, and the number of layers. The results of the grid search for the labeler recurrent network are shown in Table 4 and the results for the classifier network are shown in Table 5. For each RNN, we chose the most efficient model, in that it requires the least space and obtains the greatest accuracy with the lowest training time.

Interestingly, for the classifier, we see that models with two or three layers and lower learning rates achieve an equivalent accuracy as the one-layer model. However, they are inferior to the one-layer model in that the multi-layer models require more space and usually take longer to train.

Table 4: The chosen RNN network for the labeler has one layer of ten units with a learning rate of 0.1. The notation x/y/z means x for overall accuracy, y for label accuracy, and z for question accuracy, where accuracy is given as a proportion of correct predictions over total predictions. All results shown use TensorFlow's Adam Optimizer and LSTM cell.

Layers  Units   lr = 0.01           lr = 0.1            lr = 0.5
1       10      0.197/1.000/0.197   0.911/1.000/0.911   0.001/0.110/0.032
1       100     0.850/1.000/0.850   0.763/0.932/0.814   0.196/0.207/0.587
1       1000    0.048/0.281/0.525   0.882/0.907/0.955   0.225/0.230/0.975
2       10      0.000/0.000/0.000   0.037/0.099/0.048   0.005/0.009/0.354
2       100     0.096/0.337/0.096   0.000/0.000/0.000   0.000/0.000/0.000
2       1000    0.000/0.000/0.000   0.000/0.000/0.000   0.000/0.000/0.000
3       10      0.000/0.000/0.015   0.021/0.132/0.059   0.000/0.000/0.000
3       100     0.076/0.442/0.091   0.000/0.000/0.000   0.000/0.000/0.000
3       1000    0.000/0.000/0.000   0.000/0.000/0.000   0.000/0.000/0.000

Table 5: The chosen network for the classifier has one layer of 1,000 units. The values shown are accuracies given as a proportion of the number of correctly predicted classifications over total classifications. All results use TensorFlow's Gradient Descent Optimizer and LSTM cell.
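The grid search itself is straightforward; the following is a minimal sketch of the loop that could produce tables like Tables 4 and 5. The train_and_evaluate function is a hypothetical stand-in for training one TensorFlow model and returning its accuracy, and the tie-breaking rule is an assumption reflecting the preference for smaller models stated above.

```python
import itertools

def train_and_evaluate(layers, units, learning_rate):
    """Hypothetical stand-in: build, train, and score one LSTM model,
    returning its overall accuracy on the held-out set."""
    return 0.0  # replace with actual training and evaluation

# Grid matching Table 4: 3 depths x 3 widths x 3 learning rates.
grid = itertools.product([1, 2, 3], [10, 100, 1000], [0.01, 0.1, 0.5])
results = {}
for layers, units, lr in grid:
    results[(layers, units, lr)] = train_and_evaluate(layers, units, lr)

# Pick the most accurate configuration, breaking ties toward smaller models.
best = max(results, key=lambda cfg: (results[cfg], -cfg[0], -cfg[1]))
print("best configuration (layers, units, lr):", best)
```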
This section is included to illustrate examples of the labeler network incorrectly extracting the question. In each of these cases, the classifier receives as input the labeler's incorrect output. The classifier's handling of these errors is shown in Figure 12.

(1) Labeler input: Let the acceleration due to gravity on Planet Watson be 65 ft/s^2. A ping pong ball is released from the top of a 3 story building, where each story is 79 m. What is the maximum speed the ping pong ball obtains?

(2) Labeler input: Assume the acceleration due to gravity is 49 m/s^2. A ping pong ball is launched at a speed of 35 m/s and an elevation of 88 degrees. What is the magnitude of the velocity of the ping pong ball just before it touches the ground?
Labeler output / classifier input: What is the magnitude of the velocity of the
Classifier output: (speed : max)
Expected output: (speed : x2=0)

(3) Labeler input: Let the acceleration due to gravity on Planet Watson be 71 ft/s^2. A ping pong ball is thrown at a speed of 53 mph and an elevation of 52 degrees. What is the magnitude of the velocity of the ping pong ball just before it touches the ground?
Labeler output / classifier input: What is the magnitude of the velocity of the
Classifier output: (speed : max)
Expected output: (speed : x2=0)

Figure 12: Examples of incorrectly extracted questions from the labeler and the classifier's response to them. In all three cases, the question is cut short. The classifier still makes the correct classification for the first case, but fails for the second and third cases."}, {"section_index": "10", "section_name": "D WORD EMBEDDINGS", "section_text": "To input the words into both RNNs, the words were first encoded as word embeddings. Word embeddings map words to a multi-dimensional space, providing the words with numerical representations which expose relationships between words. The final embeddings for the labeler network are 10-dimensional, and the embeddings for the classifier network are 1,000-dimensional. Rather than use Word2Vec, we chose to train the embeddings simultaneously with the weights and biases. We were interested in seeing if embeddings trained for a particular task could capture intuitive word features, as can often be seen with embeddings trained with Word2Vec (Mikolov et al., 2013).

In order to explore the results of the trained embeddings, we used scikit-learn's implementation of t-SNE to map the high-dimensional embeddings down to two dimensions (van der Maaten & Hinton, 2008). The results from t-SNE are shown in Figure 13. Words appear exactly as they appear in the word problems, and no stemmers are used.

The embeddings from the labeler network seem more intuitive, as numbers and similar units, such as "m/s", "mph", and "ft/s", are mapped to similar regions. We had hypothesized that the embeddings may capture some word function related to the task the embeddings were being trained to perform. However, the objects seem to be distributed throughout the space and have no easily distinguishable pattern, despite playing a similar functional role in each word problem. It is even more difficult to discern any patterns from the embeddings of the classifier network. We do see that words such as "traveling", "traveled", and "travels" map near each other, as well as question words "What" and "How". We predict that the limited vocabulary in the question space of only forty words may contribute to these more perplexing results by reducing the effectiveness with which t-SNE can determine the similarity between words.
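This kind of projection can be reproduced with a few lines of scikit-learn; the sketch below is illustrative only, with a toy vocabulary and a random array standing in for the trained embedding lookup table.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-ins for the trained lookup table and its vocabulary.
vocab = ["m/s", "mph", "ft/s", "What", "How", "traveling", "traveled"]
embeddings = np.random.randn(len(vocab), 10)  # e.g., the 10-dim labeler embeddings

# Project to 2-D; perplexity must be smaller than the number of points.
points = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(embeddings)

plt.scatter(points[:, 0], points[:, 1])
for (x, y), word in zip(points, vocab):
    plt.annotate(word, (x, y))
plt.show()
```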
Figure 13: Top: The embeddings from the labeler network for the top 100 most frequent words in the word problems. Bottom: The embeddings from the classifier network for all words in the questions."}, {"section_index": "11", "section_name": "E WORD PROBLEM GRAMMAR", "section_text": "Notation: "object" is used as a parameter in order to enforce consistency between parts of the problem. Within a word problem, the same object must appear wherever an object symbol occurs. As used in the question part of the grammar, "x1" indicates horizontal displacement and "x2" indicates vertical displacement. When used with numbers, "..." indicates the sequence of numbers continues in between the bars.

(object)                        ::= (training_object) | (val_test_object)
(training_object)               ::= golf ball | stone | chair | feather | soccer ball | rock | cannonball
(val_test_object)               ::= pebble | ping pong ball | vacuum | tennis ball | basketball | hat
(formulation(object))           ::= (training_formulation(object)) | (val_test_formulation(object))
(training_formulation(object))  ::= A object is (action). (Assumption).
(val_test_formulation(object))  ::= (Assumption). A object is (action).
(assumption)                    ::= Let the acceleration due to gravity on Planet Watson be (acceleration).
                                  | Assume the acceleration due to gravity is (acceleration).
(acceleration)                  ::= (accel_value) (accel_unit)
(accel_value)                   ::= 1 | 2 | 3 | ... | 100
(accel_unit)                    ::= m/s^2 | ft/s^2
(action)                        ::= (moving) | (stationary)
(moving)                        ::= (descent) | (projectile)
(descent)                       ::= descending at a speed of (speed) | moving downwards at a speed of (speed)
(projectile)                    ::= (proj_verb) at a speed of (speed) and an (angle_word) of (angle) degrees
(proj_verb)                     ::= thrown | fired | launched
(speed)                         ::= (speed_value) (speed_unit)
(speed_value)                   ::= 1 | 2 | ... | 99
(speed_unit)                    ::= m/s | ft/s | mph
(angle_word)                    ::= elevation | angle from the horizontal
(angle)                         ::= 1 | 2 | 3 | ... | 89
(stationary)                    ::= (stat_verb) from (location)
(stat_verb)                     ::= released | dropped | let go
(training_max_x2(object))       ::= What is the maximum height the object reaches?

Whenever the grammar dictates a choice of construct (for example, when selecting the object of a word problem), a uniform random number generator is used to select one of the valid constructs. Therefore, the frequency of a particular form in the training, validation and test sets ultimately depends on how many random choices are necessary to produce that form and how many variations there are in each choice.
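To illustrate this uniform-choice generation process, here is a minimal sketch of a random grammar expansion over a simplified fragment of the grammar above. It is not the authors' generator; the dictionary encoding and symbol names are choices of this sketch.

```python
import random

# A small fragment of the word-problem grammar; each right-hand side is a
# list of alternatives (token sequences), chosen uniformly at random.
grammar = {
    "<problem>": [["A", "<object>", "is", "<projectile>", "."]],
    "<projectile>": [["<proj_verb>", "at", "a", "speed", "of", "<speed_value>",
                      "<speed_unit>", "and", "an", "elevation", "of",
                      "<angle>", "degrees"]],
    "<object>": [["golf", "ball"], ["stone"], ["chair"], ["feather"],
                 ["soccer", "ball"], ["rock"], ["cannonball"]],
    "<proj_verb>": [["thrown"], ["fired"], ["launched"]],
    "<speed_value>": [[str(v)] for v in range(1, 100)],
    "<speed_unit>": [["m/s"], ["ft/s"], ["mph"]],
    "<angle>": [[str(v)] for v in range(1, 90)],
}

def expand(symbol):
    """Recursively expand a symbol by uniform random choice."""
    if symbol not in grammar:                      # terminal word
        return [symbol]
    out = []
    for tok in random.choice(grammar[symbol]):     # uniform choice of alternative
        out.extend(expand(tok))
    return out

print(" ".join(expand("<problem>")))
```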
Table 6 illustrates the simple case of occurrence counts of the different objects in our word problems. The training set uses seven different objects, while the validation and test sets use six objects. Not surprisingly, each object in the training set appears in approximately 1/7 of the total number of problems in that set. Meanwhile, each object in the validation and test sets appears in approximately 1/6 of the total number of problems in those sets.

A more interesting situation is illustrated in Table 7 for the occurrence counts of question types. As shown in Table 2, there are nine different question types. However, the grammar works by first choosing one of two groups of questions: either max-type questions (the first three in Table 2) or conditional-type questions (the last six in Table 2). Within each group, there is equal probability for each question type. Consequently, as Table 7 shows, each of the max-type questions is approximately twice as common as each of the conditional-type questions.

Table 6: Occurrence counts for different objects in word problems

(a) training set           (b) validation set              (c) test set
object        count        object           count          object           count
golf ball     1052         pebble           336            pebble           156
stone         1007         ping pong ball   342            ping pong ball   159
chair         987          vacuum           316            vacuum           165
feather       1020         tennis ball      355            tennis ball      163
soccer ball   965          basketball       325            basketball       178
rock          989          hat              326            hat              179
cannonball    980

Table 7: Occurrence counts for different question types

(a) training set                    (b) validation set                  (c) test set
class                  count        class                  count        class                  count
(x1 : max)             1163         (x1 : max)             326          (x1 : max)             168
(speed : max)          1157         (speed : max)          349          (speed : max)          180
(x2 : max)             1120         (x2 : max)             325          (x2 : max)             166
(speed : max height)   610          (speed : max height)   160          (speed : max height)   64
(time : max height)    602          (time : max height)    158          (time : max height)    92
(x1 : x2=0)            598          (x1 : x2=0)            160          (x1 : x2=0)            88
(time : x2=0)          596          (time : x2=0)          194          (time : x2=0)          75
(speed : x2=0)         585          (speed : x2=0)         180          (speed : x2=0)         77
(x1 : max height)      569          (x1 : max height)      148          (x1 : max height)      90
"}]
HJTzHtqee
[{"section_index": "0", "section_name": "A COMPARE-AGGREGATE MODEL FOR MATCHING\nTEXT SEQUENCES", "section_text": "Shuohang Wang\nSchool of Information Systems\nSingapore Management University\nshwang.2014@phdis.smu.edu.sg"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many natural language processing problems involve matching two or more sequences to make a\ndecision. For example, in textual entailment, one needs to determine whether a hypothesis sentence\ncan be inferred from a premise sentence (2015). In machine comprehension, given\na passage, a question needs to be matched against it in order to find the correct answer (Richardson\net al.| 2013} Tapaswi et al.| 2016). Table[I] gives two example sequence matching problems. In the\nfirst example, a passage, a question and four candidate answers are given. We can see that to get\nthe correct answer, we need to match the question against the passage and identify the last sentence\nto be the answer-bearing sentence. In the second example, given a question and a set of candidate\nanswers, we need to find the answer that best matches the question. Because of the fundamental\nimportance of comparing two sequences of text to judge their semantic similarity or relatedness,\nsequence matching has been well studied in natural language processing.\nA common trait of a number of these recent studies on sequence matching problems is the use of ;\n\u201ccompare-aggregate\u201d framework [2016b} [2016} (2016). bi\nsuch a framework, comparison of two sequences is not do yy comparing two vectors each rep\nresenting an entire sequence. Instead, these models first compare vector representations of smalle:\nunits such as words from these sequences and then aggregate these comparison results to make th\n\nfinal decision. For example, the match-LSTM model proposed by|Wang & Jiang for tex\ntual entailment first compares each word in the hypothesis with an attention-weight ion of th\npremise. The comparison results are then aggregated through an LSTM. propose\n\na pairwise word interaction model that first takes each pair of words from two sequences and applie:\na comparison unit on the two words. It then combines the results of these word interactions using :\nsimilarity focus layer followed by a multi-layer CNN. [Parikh et al.|(2016) proposed a decomposabl.\nattention model for textual entailment. in which words from each sequence are compared with ar\nSchool of Information Systems\nSingapore Management University"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Many NLP tasks including machine comprehension, answer selection and text en-\ntailment require the comparison between sequences. Matching the important units\nbetween sequences is a key to solve these problems. In this paper, we present a\ngeneral \u201ccompare-aggregate\u201d framework that performs word-level matching fol-\nlowed by aggregation using Convolutional Neural Networks. We particularly fo-\ncus on the different comparison functions we can use to match two vectors. We\nuse four different datasets to evaluate the model. We find that some simple com-\nparison functions based on element-wise operations can work better than standard\nneural network and neural tensor network.\nWith recent advances of neural network models in natural language processing, a standard practice\nfor sequence modeling now is to encode a sequence of text as an embedding vector using models\nsuch as RNN and CNN. 
Plot: ... Aragorn is crowned King of Gondor and taking Arwen as his queen before all present at his coronation bowing before Frodo and the other Hobbits. The Hobbits return to the Shire where Sam marries Rosie Cotton. ...
Question: Where does Sam marry Rosie?

Question: can i have auto insurance without a car
Ground-truth answer: yes, it be possible have auto insurance without own a vehicle. you will purchase what be call a name ...
Another candidate answer: insurance not be a tax or merely a legal obligation because auto insurance follow a car ...

Table 1: The example on the left is a machine comprehension problem from MovieQA, where the correct answer here is The Shire. The example on the right is an answer selection problem from InsuranceQA.

Although these studies have shown the effectiveness of such a "compare-aggregate" framework for sequence matching, there are at least two limitations with these previous studies: (1) Each of the models proposed in these studies is tested on one or two tasks only, but we hypothesize that this general framework is effective on many sequence matching problems. There has not been any study that empirically verifies this. (2) More importantly, these studies did not pay much attention to the comparison function that is used to compare two small textual units. Usually a standard feedforward network is used (Hu et al., 2014; Wang & Jiang, 2016b) to combine two vectors representing two units that need to be compared, e.g., two words. However, based on the nature of these sequence matching problems, we essentially need to measure how semantically similar the two sequences are. Presumably, this property of these sequence matching problems should guide us in choosing more appropriate comparison functions. Indeed He & Lin (2016) used cosine similarity, Euclidean distance and dot product to define the comparison function, which seem better justified. But they did not systematically evaluate these similarity or distance functions or compare them with a standard feedforward network.

In this paper, we argue that the general "compare-aggregate" framework is effective for a wide range of sequence matching problems. We present a model that follows this general framework and test it on four different datasets, namely, MovieQA, InsuranceQA, WikiQA and SNLI. The first three datasets are for question answering, but the setups of the tasks are quite different. The last dataset is for textual entailment. More importantly, we systematically present and test six different comparison functions. We find that overall a comparison function based on element-wise subtraction and multiplication works the best on the four datasets.

The contributions of this work are twofold: (1) Using four different datasets, we show that our model following the "compare-aggregate" framework is very effective when compared with the state-of-the-art performance on these datasets.
(2) We conduct systematic evaluation of different comparison functions and show that a comparison function based on element-wise operations, which is not widely used for word-level matching, works the best across the different datasets. We believe that these findings will be useful for future research on sequence matching problems. We have also made our code available online."}, {"section_index": "3", "section_name": "2 METHOD", "section_text": "In this section, we propose a general model following the "compare-aggregate" framework for matching two sequences. This general model can be applied to different tasks. We focus our discussion on six different comparison functions that can be plugged into this general "compare-aggregate" model. In particular, we hypothesize that two comparison functions based on element-wise operations, SUB and MULT, are good middle ground between highly flexible functions using standard neural network models and highly restrictive functions based on cosine similarity and/or Euclidean distance. As we will show in the experiment section, these comparison functions based on element-wise operations can indeed perform very well on a number of sequence matching problems.

[Figure 1's left panel diagrams the model pipeline (word embeddings, attention, comparison, and CNN aggregation); its right panel diagrams the comparison functions (1) NN, (2) NTN, (3) EucCos, (4) Sub, (5) Mult, and (6) SubMult+NN.]

Figure 1: The left hand side is an overview of the model. The right hand side shows the details about the different comparison functions. The rectangles in dark represent parameters to be learned. × represents matrix multiplication."}, {"section_index": "4", "section_name": "2.1 PROBLEM DEFINITION AND MODEL OVERVIEW", "section_text": "The general setup of the sequence matching problem we consider is the following. We assume there are two sequences to be matched. We use two matrices Q ∈ R^{d×Q} and A ∈ R^{d×A} to represent the word embeddings of the two sequences, where Q and A are the lengths of the two sequences, respectively, and d is the dimensionality of the word embeddings. In other words, each column vector of Q or A is an embedding vector representing a single word. Given a pair of Q and A, the goal is to predict a label y. For example, in textual entailment, Q may represent a premise and A a hypothesis, and y indicates whether Q entails A or contradicts A. In question answering, Q may be a question and A a candidate answer, and y indicates whether A is the correct answer to Q.

We treat the problem as a supervised learning task. We assume that a set of training examples in the form of (Q, A, y) is given and we aim to learn a model that maps any pair of (Q, A) to a y.

An overview of our model is shown in Figure 1. The model can be divided into the following four layers:

1. Preprocessing: We use a preprocessing layer (not shown in the figure) to process Q and A to obtain two new matrices Q̄ ∈ R^{l×Q} and Ā ∈ R^{l×A}. The purpose here is to use some gate values to control the importance of different words in making the predictions on the sequence pair. For example, q̄_i ∈ R^l, which is the i-th column vector of Q̄, encodes the i-th word in Q.

2. Attention: We apply a standard attention mechanism on Q̄ and Ā to obtain attention weights over the column vectors in Q̄ for each column vector in Ā. With these attention weights, for each column vector ā_j in Ā, we obtain a corresponding vector h_j, which is an attention-weighted sum of the column vectors of Q̄.

3. Comparison: We use a comparison function f to combine each pair of ā_j and h_j into a vector t_j.

4. Aggregation: We use a CNN layer to aggregate the sequence of vectors t_j for the final classification.

Although this model follows more or less the same framework as the model proposed by Parikh et al. (2016), our work has some notable differences. First, we will pay much attention to the comparison function f and compare a number of options, including some uncommon ones based on element-wise operations. Second, we apply our model to four different datasets representing four different tasks to evaluate its general effectiveness for sequence matching problems. There are also some other differences from the work by Parikh et al. (2016). For example, we use a CNN layer instead of summation and concatenation for aggregation. Our attention mechanism is one-directional instead of two-directional.

In the rest of this section we will present the model in detail. We will focus mostly on the comparison functions we consider.
Inspired by the use of gates in LSTM and GRU, we preprocess Q and A with the following formulas:

$$\bar{Q} = \sigma(W^i Q + b^i \otimes e_Q) \odot \tanh(W^u Q + b^u \otimes e_Q),$$
$$\bar{A} = \sigma(W^i A + b^i \otimes e_A) \odot \tanh(W^u A + b^u \otimes e_A),$$

where ⊙ is element-wise multiplication, and W^i, W^u ∈ R^{l×d} and b^i, b^u ∈ R^l are parameters to be learned. The outer product (· ⊗ e_X) produces a matrix or row vector by repeating the vector or scalar on the left X times. Here σ(W^i Q + b^i ⊗ e_Q) and σ(W^i A + b^i ⊗ e_A) act as gate values to control the degree to which the original values of Q and A are preserved in Q̄ and Ā. For example, for stop words, their gate values would likely be low for tasks where stop words make little difference to the final predictions.

In this preprocessing step, the word order does not matter. Although a better way would be to use an RNN such as LSTM or GRU to chain up the words such that we can capture some contextual information, this could be computationally expensive for long sequences. In our experiments, we only incorporated LSTM into the formulas above for the SNLI task.

On top of Q̄ and Ā, a standard attention layer is applied:

$$G = \mathrm{softmax}\left((W^g \bar{Q} + b^g \otimes e_Q)^{\mathrm{T}} \bar{A}\right),$$
$$H = \bar{Q} G,$$

where W^g ∈ R^{l×l} and b^g ∈ R^l are parameters to be learned, G ∈ R^{Q×A} is the attention weight matrix, and H ∈ R^{l×A} are the attention-weighted vectors. Specifically, h_j, which is the j-th column vector of H, is a weighted sum of the column vectors of Q̄ and represents the part of Q that best matches the j-th word in A. Next we will combine h_j and ā_j using a comparison function.
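The following is a minimal NumPy sketch of these two layers, for illustration only; the shapes follow the definitions above, and the random matrices are stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, l, Q_len, A_len = 4, 3, 5, 6   # embedding dim, hidden dim, sequence lengths

Q = rng.normal(size=(d, Q_len))   # question/premise embeddings
A = rng.normal(size=(d, A_len))   # answer/hypothesis embeddings
Wi, Wu, Wg = (rng.normal(size=(l, d)), rng.normal(size=(l, d)),
              rng.normal(size=(l, l)))
bi, bu, bg = np.zeros((l, 1)), np.zeros((l, 1)), np.zeros((l, 1))

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Preprocessing: gated, order-insensitive projections (b + broadcast plays
# the role of b (x) e, repeating the bias over columns).
Q_bar = sigmoid(Wi @ Q + bi) * np.tanh(Wu @ Q + bu)   # (l, Q_len)
A_bar = sigmoid(Wi @ A + bi) * np.tanh(Wu @ A + bu)   # (l, A_len)

# Attention: each column of G is a distribution over Q̄'s positions.
scores = (Wg @ Q_bar + bg).T @ A_bar                  # (Q_len, A_len)
G = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
H = Q_bar @ G                                         # (l, A_len): one h_j per word of A
```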
"}, {"section_index": "5", "section_name": "2.3 COMPARISON", "section_text": "The goal of the comparison layer is to match each ā_j, which represents the j-th word and its context in A, with h_j, which represents a weighted version of Q that best matches ā_j. Let f denote a comparison function that transforms ā_j and h_j into a vector t_j to represent the comparison result.

A natural choice of f is a standard neural network layer that consists of a linear transformation followed by a non-linear activation function, or a neural tensor network. For example, we can consider the following choices:

$$\text{NEURALNET (NN):} \quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(W \begin{bmatrix} \bar{a}_j \\ h_j \end{bmatrix} + b\right),$$

$$\text{NEURALTENSORNET (NTN):} \quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(\bar{a}_j^{\mathrm{T}} T^{[1 \ldots l]} h_j + b\right),$$

where W, the tensor T^{[1…l]} and the bias b are parameters to be learned.

However, we note that for many sequence matching problems, we intend to measure the semantic similarity or relatedness of the two sequences. So at the word level, we also intend to check how similar or related ā_j is to h_j. For this reason, a more natural choice used in some previous work is Euclidean distance or cosine similarity between ā_j and h_j. We therefore consider the following definition of f:

$$\text{EUCLIDEAN+COSINE (EUCCOS):} \quad t_j = \begin{bmatrix} \lVert \bar{a}_j - h_j \rVert \\ \cos(\bar{a}_j, h_j) \end{bmatrix}.$$

Note that with EUCCOS, the resulting vector t_j is only a 2-dimensional vector. Although EUCCOS is a well-justified comparison function, we suspect that it may lose some useful information from the original vectors ā_j and h_j. On the other hand, NN and NTN are too general and thus do not capture the intuition that we care mostly about the similarity between ā_j and h_j.

To use something that is a good compromise between the two extreme cases, we consider the following two new comparison functions, which operate on the two vectors in an element-wise manner. These functions have been used previously by Mou et al. (2016).

$$\text{SUBTRACTION (SUB):} \quad t_j = f(\bar{a}_j, h_j) = (\bar{a}_j - h_j) \odot (\bar{a}_j - h_j),$$
$$\text{MULTIPLICATION (MULT):} \quad t_j = f(\bar{a}_j, h_j) = \bar{a}_j \odot h_j.$$

Note that the operator ⊙ is element-wise multiplication. For both comparison functions, the resulting vector t_j has the same dimensionality as ā_j and h_j.

We can see that SUB is closely related to Euclidean distance in that Euclidean distance is the sum of all the entries of the vector t_j produced by SUB. But by not summing up these entries, SUB preserves some information about the different dimensions of the original two vectors. Similarly, MULT is closely related to cosine similarity but preserves some information about the original two vectors.

Finally, we consider combining SUB and MULT followed by an NN layer as follows:

$$\text{SUBMULT+NN:} \quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(W \begin{bmatrix} (\bar{a}_j - h_j) \odot (\bar{a}_j - h_j) \\ \bar{a}_j \odot h_j \end{bmatrix} + b\right).$$

In summary, we consider six different comparison functions: NN, NTN, EUCCOS, SUB, MULT and SUBMULT+NN. Among these functions, the last three (SUB, MULT and SUBMULT+NN) have not been widely used in previous work for word-level matching.
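The element-wise functions are cheap to implement. Below is a minimal NumPy sketch of several of the comparison functions (NTN is omitted for brevity; the random weights are stand-ins for learned parameters):

```python
import numpy as np

def nn(a, h, W, b):
    """NN: ReLU over a learned transform of the concatenation [a; h]."""
    return np.maximum(0.0, W @ np.concatenate([a, h]) + b)

def euc_cos(a, h):
    """EucCos: 2-dim vector of Euclidean distance and cosine similarity."""
    return np.array([np.linalg.norm(a - h),
                     a @ h / (np.linalg.norm(a) * np.linalg.norm(h))])

def sub(a, h):
    """Sub: element-wise squared difference."""
    return (a - h) * (a - h)

def mult(a, h):
    """Mult: element-wise product."""
    return a * h

def submult_nn(a, h, W, b):
    """SubMult+NN: NN layer over the concatenated Sub and Mult vectors."""
    return np.maximum(0.0, W @ np.concatenate([sub(a, h), mult(a, h)]) + b)

# Example with hidden size l = 3.
rng = np.random.default_rng(0)
l = 3
a, h = rng.normal(size=l), rng.normal(size=l)
t = submult_nn(a, h, rng.normal(size=(l, 2 * l)), np.zeros(l))  # t_j in R^l
```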
"}, {"section_index": "6", "section_name": "2.4 AGGREGATION", "section_text": "After we apply the comparison function to each pair of ā_j and h_j to obtain a series of vectors t_j, finally we aggregate these vectors using a one-layer CNN (Kim, 2014):

$$r = \mathrm{CNN}([t_1, \ldots, t_A]).$$

r ∈ R^{nl} is then used for the final classification, where n is the number of windows in the CNN.

Table 2: The statistics of different datasets. Q: question/hypothesis, C: candidate answers for each question, A: answer/hypothesis, P: plot, w: word (average).

           MovieQA             InsuranceQA            WikiQA             SNLI
           train  dev   test   train  dev   test      train dev   test  train  dev   test
#Q         9848   1958  3138   13K    1K    1.8K*2    873   126   243   549K   9842  9824
#C         5      5     5      50     500   500       10    9     10    -      -     -
#w in P    873    866   914    -      -     -         -     -     -     -      -     -
#w in Q    10.6   10.6  10.8   7.2    7.2   7.2       6.5   6.5   6.4   14     15.2  15.2
#w in A    5.9    5.6   5.5    92.1   92.1  92.1      25.5  24.7  25.1  8.3    8.4   8.3

Table 3: Experiment Results

Models             MovieQA      InsuranceQA          WikiQA            SNLI
                   dev   test   dev   test1  test2   MAP     MRR      train  test
Cosine Word2Vec    46.4  45.63  -     -      -       -       -        -      -
Cosine TFIDF       47.6  47.36  -     -      -       -       -        -      -
SSCB TFIDF         48.5  -      -     -      -       -       -        -      -
IR model           -     -      52.7  55.1   50.8    -       -        -      -
CNN with GESD      -     -      65.4  65.3   61.0    -       -        -      -
Attentive LSTM     -     -      68.9  69.0   64.8    -       -        -      -
IARNN-Occam        -     -      69.1  68.9   65.1    0.7341  0.7418   -      -
IARNN-Gate         -     -      70.0  70.1   62.8    0.7258  0.7394   -      -
CNN-Cnt            -     -      -     -      -       0.6520  0.6652   -      -
ABCNN              -     -      -     -      -       0.6921  0.7108   -      -
CubeCNN            -     -      -     -      -       0.7090  0.7234   -      -
W-by-W Attention   -     -      -     -      -       -       -        85.3   83.5
match-LSTM         -     -      -     -      -       -       -        92.0   86.1
LSTMN              -     -      -     -      -       -       -        88.5   86.3
Decomp Attention   -     -      -     -      -       -       -        90.5   86.8
EBIM+TreeLSTM      -     -      -     -      -       -       -        93.0   88.3
NN                 31.6  -      76.8  74.9   72.4    0.7102  0.7224   89.3   86.3
NTN                31.6  -      75.6  75.0   72.5    0.7349  0.7456   91.6   86.3
EUCCOS             71.9  -      70.6  70.2   67.9    0.6740  0.6882   87.1   84.0
SUB                64.9  -      70.0  71.3   68.2    0.7019  0.7151   89.8   86.8
MULT               66.4  -      76.0  75.2   73.4    0.7433  0.7545   89.7   85.8
SUBMULT+NN         72.1  72.9   77.0  75.6   72.3    0.7332  0.7477   89.4   86.8

Table 4: Ablation Experiment Results. "no preprocess": remove the preprocessing layer by directly using word embeddings Q and A to replace Q̄ and Ā in Eqn. 1; "no attention": remove the attention layer by using mean pooling of Q̄ to replace all the vectors of H in Eqn. 2.

Models                       MovieQA      InsuranceQA          WikiQA            SNLI
                             dev   test   dev   test1  test2   MAP     MRR      train  test
SUBMULT+NN (no preprocess)   72.0  -      72.8  73.8   70.7    0.6996  0.7156   89.6   82.8
SUBMULT+NN (no attention)    60.4  -      69.4  70.4   67.8    0.7164  0.7238   89.0   84.4

In this section, we evaluate our model on four different datasets representing different tasks. The first three datasets are question answering tasks while the last one is on textual entailment. The statistics of the four datasets are shown in Table 2. We will first introduce the task settings and the way we customize the "compare-aggregate" structure to each task. Then we will show the baselines for the different datasets. Finally, we discuss the experiment results shown in Table 3 and the ablation study shown in Table 4.

In all these tasks, we use matrix Q ∈ R^{d×Q} to represent the question or premise and matrix A_k ∈ R^{d×A_k} (k ∈ [1, K]) to represent the k-th answer or the hypothesis. For the machine comprehension task MovieQA (Tapaswi et al., 2016), there is also a matrix P ∈ R^{d×P} that represents the plot of a movie. Here Q is the length of the question or premise, A_k the length of the k-th answer, and P the length of the plot.

For the InsuranceQA (Feng et al., 2015) dataset, the task is an answer selection task which needs to select the correct answer for a question from a candidate pool. For the WikiQA (Yang et al., 2015) dataset, we need to rank the candidate answers according to a question. For both tasks, there are K candidate answers for each question. Let us use r_k to represent the resulting vector produced by Eqn. 9 for the k-th answer. In order to select one of the K answers, we first define R = [r_1, r_2, …, r_K]. We then compute the probability of the k-th answer to be the correct one as

$$p(k \mid R) = \mathrm{softmax}\left(w^{\mathrm{T}} \tanh(W^s R + b^s \otimes e_K) + b \otimes e_K\right),$$

where W^s ∈ R^{l×nl}, w ∈ R^l, b^s ∈ R^l, and b ∈ R are parameters to be learned.
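A minimal NumPy sketch of this answer-scoring step (illustrative only; the random parameters stand in for learned ones):

```python
import numpy as np

def answer_scores(R, Ws, w, bs, b):
    """Score K candidate answers; R holds one aggregated column r_k per answer."""
    z = w @ np.tanh(Ws @ R + bs[:, None]) + b      # (K,) raw scores
    e = np.exp(z - z.max())                        # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
nl, l, K = 12, 3, 5                                # CNN output size, hidden size, candidates
R = rng.normal(size=(nl, K))                       # columns r_1..r_K from the CNN layer
p = answer_scores(R, rng.normal(size=(l, nl)), rng.normal(size=l),
                  np.zeros(l), 0.0)
print("predicted answer:", int(p.argmax()))
```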
For the machine comprehension task MovieQA, each question is related to Plot Synopses written by fans after watching the movie and each question has five candidate answers. So for each candidate answer there are three sequences to be matched: the plot P, the question Q and the answer A_k. For each k, we first match Q and P and refer to the matching result at position j as t^q_j, as generated by one of the comparison functions f. Similarly, we also match A_k with P and refer to the matching result at position j as t^a_{k,j}. We then define

$$t_{k,j} = \begin{bmatrix} t^q_j \\ t^a_{k,j} \end{bmatrix}, \qquad r_k = \mathrm{CNN}([t_{k,1}, \ldots, t_{k,P}]).$$

To select an answer from the K candidate answers, again we use the softmax equation above to compute the probabilities.

For the SNLI dataset, the task is text entailment, which identifies the relationship (entailment, contradiction or neutral) between a premise sentence and a hypothesis sentence. Here K = 1, and there are exactly two sequences to match. The actual model structure is what we have described before.

The implementation details of the models are as follows. The word embeddings are initialized from GloVe (Pennington et al., 2014). During training, they are not updated. The word embeddings not found in GloVe are initialized with zero. The dimensionality l of the hidden layers is set to 150. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β₁ = 0.9 and β₂ = 0.999 to optimize the model. We do not use L2-regularization. The main parameter we tuned is the dropout on the embedding layer. For WikiQA, which is a relatively small dataset, we also tune the learning rate and the batch size. For the others, we set the batch size to 30 and the learning rate to 0.002."}, {"section_index": "7", "section_name": "3.2 BASELINES", "section_text": "Here, we will introduce the baselines for each dataset. We did not re-implement these models but simply took the reported performance for the purpose of comparison.

InsuranceQA: • IR model: This model by Bendersky et al. (2010) learns the concept information to help rank the candidates. • CNN with GESD: This model by Feng et al. (2015) uses Euclidean distance and dot product between sequence representations built through convolutional neural networks to select the answer. • Attentive LSTM: Tan et al. (2016) used a soft-attention mechanism to select the most important information from the candidates according to the representation of the questions. • IARNN-Occam: This model by Wang et al. (2016) adds regularization on the attention weights. • IARNN-Gate: This model by Wang et al. (2016) uses the representation of the question to build the GRU gates for each candidate answer.

SNLI: • W-by-W Attention: The model by Rocktäschel et al. (2015), who first introduced the attention mechanism into text entailment. • match-LSTM: The model by Wang & Jiang (2016b), which concatenates the matched words as the inputs of an LSTM. • LSTMN: The long short-term memory-network model proposed by Cheng et al. (2016). • EBIM+TreeLSTM: The state-of-the-art model proposed by Chen et al. (2016).

WikiQA: • IARNN-Occam and IARNN-Gate as introduced before. • CNN-Cnt: This model by Yang et al. (2015) combines sentence representations built by a convolutional neural network with logistic regression. • ABCNN: This model is the Attention-Based Convolutional Neural Network proposed by Yin et al. (2015). • CubeCNN: proposed by He & Lin (2016), this model builds a CNN on all pairs of word similarity.

MovieQA: All the baselines we consider come from the work of Tapaswi et al. (2016): • Cosine Word2Vec: A sliding window is used to select the answer according to the similarities computed
e SSCB TFIDF: Instead of using the sliding window method, a convolutional neural network is\nbuilt on the sentence level similarities.\nWe use accuracy as the evaluation metric for the datasets MovieQA, InsuranceQA and SNLI, as there\nis only one correct answer or one label for each instance. For WikiQA, there may be multiple correct\nanswers, so evaluation metrics we use are Mean Average Precision (MAP) and Mean Reciprocal\nRank (MRR).\nWe observe the following from the results. (1) Overall, we can find that our general \u201ccompare\naggregate\u201d structure achieves the best performance on MovieQA, InsuranceQA, WikiQA dataset:\nand very competitive performance on the SNLI dataset. Especially for the InsuranceQA dataset\nwith any comparison function we use, our model can outperform all the previous models. (2) The\ncomparison method SUBMULT+NN is the best in general. (3) Some simple comparison function:\ncan achieve better performance than the neural networks or neural tensor network comparison func\ntions. For example, the simplest comparison function EUCCOS achieves nearly the best performanc\u00ab\nin the MovieQA dataset, and the element-wise comparison functions, which do not need parameter:\ncan achieve the best performance on the WikiQA dataset. (4) We find the preprocessing layer anc\nthe attention layer for word selection to be important in the \u201ccompare-aggregate\u201d structure throug!\nthe experiments of removing these two layers separately. We also see that for sequence matchins\nwith big difference in length, such as the MovieQA and InsuranceQA tasks, the attention laye\nplays a more important role. For sequence matching with smaller difference in length, such a:\nthe WikiQA and SNLI tasks, the pre-processing layer plays a more important role. (5) For the\nMovieQA, InsuranceQA and WikiQA tasks, our preprocessing layer is order-insensitive so that i\nwill not take the context information into consideration during the comparison, but our model cat\nstill outperform the previous work with order-sensitive preprocessing layer. With this finding, we\nbelieve the word-by-word comparison part plays a very important role in these tasks. We will furthe\nexplore the preprocessing layer in the future."}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "We review related work in three types of general structures for matching sequences\nSiamense network: These kinds of models use the same structure, such as RNN or CNN, to build\nthe representations for the sequences separately and then use them for classification. Then cosine\n\nsimilarity (Feng et al. 15}/Yang et al. 15), element-wise operation (Tai et al Mov et al.\nimilarity ig et al.||2015 ig et al.|/2015), el ise operati i et al.|/2015 i\n\nor neural network-based combination (2015) are used for sequence matching.\nAttentive network: Soft-attention mechanism (Bahdanau et al} 2014}|Luong et al.||2015) has been\nwidely used for sequence matching in machine comprehension (Hermann et al.||2015), text entail-\n\nment (Rocktaschel et al. Tan et al. . Instead of using the\ng\n\n) and question answering (\n\nfinal state of RNN to represent a sequence, these studies use weighted sum of all the states for the\nsequence representation.\nCompare-Aggregate network: This kind of framework is to perform the word level match-\n\ning (Wang & Jiang} |2016a}}Parikh et al. He & Lin Trischler et al. Wan et al.\ni & Ji 2016 ikh et al.||2016 & Lin} }2016;|Trischl 1.) 
"}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "We review related work in three types of general structures for matching sequences.

Siamese network: These kinds of models use the same structure, such as RNN or CNN, to build the representations for the sequences separately and then use them for classification. Then cosine similarity (Feng et al., 2015; Yang et al., 2015), element-wise operation (Tai et al., 2015; Mou et al., 2016), or neural network-based combination (Bowman et al., 2015) are used for sequence matching.

Attentive network: Soft-attention mechanisms (Bahdanau et al., 2014; Luong et al., 2015) have been widely used for sequence matching in machine comprehension (Hermann et al., 2015), text entailment (Rocktäschel et al., 2015) and question answering (Tan et al., 2016). Instead of using the final state of the RNN to represent a sequence, these studies use a weighted sum of all the states for the sequence representation.

Compare-Aggregate network: This kind of framework performs word-level matching (Wang & Jiang, 2016a; Parikh et al., 2016; He & Lin, 2016; Trischler et al., 2016; Wan et al., 2016). Our work is under this framework. But our structure is different from previous models and our model can be applied on different tasks. Besides, we analyzed different word-level comparison functions separately."}, {"section_index": "9", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we systematically analyzed the effectiveness of a "compare-aggregate" model on four different datasets representing different tasks. Moreover, we compared and tested different kinds of word-level comparison functions and found that some element-wise comparison functions can outperform the others. According to our experiment results, many different tasks can share the same "compare-aggregate" structure. In future work, we would like to test its effectiveness on multi-task learning.

This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations, 2014.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and combining sequential and tree LSTM for natural language inference. arXiv preprint arXiv:1609.06038, 2016.

Minwei Feng, Bing Xiang, Michael R Glass, Lidan Wang, and Bowen Zhou. Applying deep learning to answer selection: A study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 813-820. IEEE, 2015.

Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

Hua He and Jimmy Lin. Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2013.

Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. Match-SRNN: Modeling the recursive matching structure with spatial RNN. International Joint Conference on Artificial Intelligence, 2016.

Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016a.

Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193, 2015.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference on the North American Chapter of the Association for Computational Linguistics, 2016b.

Yi Yang, Wen-tau Yih, and Christopher Meek. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015."}]
ry18Ww5ee
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In an effort to develop more efficient search methods, the problem of hyperparameter optimization has\n\nrecently been dominated by Bayesian optimization methods (Snoek et al.|{2012}/Hutter et al.| {2011}\nthat focus on optimizing hyperparameter configuration selection. These methods\naim to identify good configurations more quickly than standard baselines like random search by\nselecting configurations in an adaptive manner; see Figure}1(a)} Existing empirical evidence suggests\nthat these methods outperform random search (Thornton et al.||2013}/Eggensperger et al. 2013} Snoek!\ne 5). However, these methods tackle a fundamentally challenging problem of simultaneously\nfitting and optimizing a high-dimensional, non-convex function with unknown smoothness, and\npossibly noisy evaluations. To overcome these difficulties, some Bayesian optimization methods\nresort to heuristics, at the expense of consistency guarantees, to model the objective function or speed\nup resource intensive subroutines|'| Moreover, these adaptive configuration selection methods are\nintrinsically sequential and thus difficult to parallelize.\nAn orthogonal approach to hyperparameter optimization focuses on speeding up configuratiot\nevaluation; see Figure[1(b)] These methods are adaptive in computation, allocating more resource\nto promising hyperparameter configurations while quickly eliminating poor ones. Resources cat\ntake various forms, including size of training set, number of features, or number of iterations fo\niterative algorithms. By adaptively allocating these resources, these methods aim to examine orders o\nmagnitude more hyperparameter configurations than methods that uniformly train all configurations t\ncompletion, thereby quickly identifying good hyperparameters. While there are methods that combin\nBayesian optimization with adaptive resource allocation (Swersky et al.| Domhan et al\n, we focus on speeding up random search as it offers a simple, parallelizable, and theoreticall\nprincipled launching point and is shown to outperform grid search (Bergstra & Bengio}|2012).\nency can be restored by allocating a fraction of resources to performing random search."}, {"section_index": "1", "section_name": "HYPERBAND: BANDIT-BASED CONFIGURATION EVAL-\nUATION FOR HYPERPARAMETER OPTIMIZATION", "section_text": "Lisha Li*, Kevin Jamieson**, Giulia DeSalvo', Afshin Rostamizadeh*, and Ameet Talwalkar\u2019\n\nLTTS\u201cT A RKTT TD 21.21 tyraztt ...1tm...1."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The task of hyperparameter optimization is becoming increasingly important as modern data analysis\npipelines grow in complexity. The quality of a predictive model critically depends on its hyperpa-\nrameter configuration, but it is poorly understood how these hyperparameters interact with each\nother to affect the quality of the resulting model. Consequently, practitioners often default to either\nhand-tuning or automated brute-force methods like random search and grid search.\n(a) Configuration Selection\n\nLoss\n\n% %\nResources\n\n(b) Configuration Evaluation\n\n%\n\nResources Allocated\n\n(c) Envelopes\nFigure 1: (a) The heatmap shows the validation error over a two dimensional search space, with\nred corresponding to areas with lower validation error, and putative configurations selected in a\nsequential manner as indicated by the numbers. 
(b) The plot shows the validation error as a function of the resources allocated to each configuration (i.e., each line in the plot). Configuration evaluation methods allocate more resources to promising configurations. (c) The validation loss as a function of total resources allocated for two configurations. The shaded areas bound the maximum distance from the terminal validation loss and monotonically decrease with the resource.

Our novel configuration evaluation method, HYPERBAND, relies on a principled early-stopping strategy to allocate resources, allowing it to evaluate orders of magnitude more configurations than uniform allocation strategies. HYPERBAND is a general-purpose technique that makes minimal assumptions, unlike prior configuration evaluation approaches (Swersky et al., 2013; Domhan et al., 2015; Swersky et al., 2014; György & Kocsis, 2011; Agarwal et al., 2011). In this work, we describe HYPERBAND, provide intuition for the algorithm through a detailed example, and present a wide range of empirical results comparing HYPERBAND with well established competitors. We also briefly describe the theoretical underpinnings of HYPERBAND; however, a thorough theoretical treatment is beyond the scope of this paper and is deferred to Li et al. (2016).

Bayesian optimization techniques model the conditional probability p(f|λ) of a configuration's performance on a metric f given a set of hyperparameters λ. For instance, SMAC uses random forests to model p(f|λ) as a Gaussian distribution (Hutter et al., 2011). TPE is a non-standard Bayesian optimization algorithm based on tree-structured Parzen density estimators (Bergstra et al., 2011). A third popular method, Spearmint, uses Gaussian processes (GP) to model p(f|λ) and performs slice sampling over the GP's hyperparameters (Snoek et al., 2012).

Adaptive configuration evaluation is not a new idea. Maron & Moore (1993) considered a setting where training time is negligible (e.g., k-nearest-neighbor classification) and evaluation on a large validation set is accelerated by evaluating on an increasing subset of the validation set, stopping early configurations that are performing poorly. Since subsets of the validation set provide unbiased estimates of its expected performance, this is an instance of the stochastic best-arm identification problem for multi-armed bandits (see Jamieson & Nowak (2014) for a brief survey).

In contrast, this paper assumes that evaluation time is negligible and the goal is to early-stop long-running training procedures by evaluating partially trained models on the validation set. Previous approaches either require strong assumptions or use heuristics to perform adaptive resource allocation. Several works propose methods that make strong assumptions on the convergence behavior of training algorithms, providing theoretical performance guarantees under these assumptions (György & Kocsis, 2011; Agarwal et al., 2011; Swersky et al., 2013; 2014; Domhan et al., 2015; Sabharwal et al., 2016). Unfortunately, these assumptions are often hard to verify, and empirical performance can drastically suffer when they are violated. One recent work of particular interest proposes a heuristic based on sequential analysis to determine stopping times for training configurations on increasing subsets of the data (Krueger et al., 2015).
However, it has a few shortcomings: (1) it is designed to speed up multi-fold cross-validation and is not significantly faster than standard holdout, (2) it is not an anytime algorithm and requires the set of configurations to be evaluated as an input, and (3) the theoretical correctness and empirical performance of this method are highly dependent on a user-defined "safety-zone."[2] Lastly, in an effort to avoid heuristics and strong assumptions, Sparks et al. (2015) proposed a halving style algorithm that did not require explicit convergence behavior, and Jamieson & Talwalkar (2015) analyzed a similar algorithm, providing theoretical guarantees and encouraging empirical results. Unfortunately, these halving style algorithms suffer from the n versus B/n issue which we will discuss in Section 3.

[2] The first two drawbacks prevent a full comparison to HYPERBAND on our selected empirical tasks; however, for completeness, we provide a comparison in Appendix A to Krueger et al. (2015) on some experimental tasks replicated from their paper.

Finally, Klein et al. (2016) recently introduced Fabolas, a Bayesian optimization method that combines adaptive selection and evaluation. Similar to Swersky et al. (2013; 2014), it models the conditional validation error as a Gaussian process using a kernel that captures the covariance with downsampling rate to allow for adaptive evaluation. While we intended to compare HYPERBAND with Fabolas, we encountered some technical difficulties when using the package and are working with the authors of Klein et al. (2016) to resolve the issues.

HYPERBAND extends the SUCCESSIVEHALVING algorithm proposed for hyperparameter optimization in Jamieson & Talwalkar (2015) and calls it as a subroutine. The idea behind SUCCESSIVEHALVING follows directly from its name: uniformly allocate a budget to a set of hyperparameter configurations, evaluate the performance of all configurations, throw out the worst half, and repeat until one configuration remains; a minimal sketch is given below. The algorithm allocates exponentially more resources to more promising configurations. Unfortunately, SUCCESSIVEHALVING requires the number of configurations n as an input to the algorithm. Given some finite time budget B (e.g., an hour of training time to choose a hyperparameter configuration), B/n resources are allocated on average across the configurations. However, for a fixed B, it is not clear a priori whether we should (a) consider many configurations (large n) with a small average training time; or (b) consider a small number of configurations (small n) with longer average training times.

We use a simple example to better understand this tradeoff. Figure 1(c) shows the validation loss as a function of total resources allocated for two configurations with terminal validation losses ν1 and ν2. The shaded areas bound the maximum deviation from the terminal validation loss and will be referred to as "envelope" functions. It is possible to differentiate between the two configurations when the envelopes diverge. Simple arithmetic shows that this happens when the width of the envelopes is less than ν2 − ν1, i.e., when the intermediate losses are guaranteed to be less than (ν2 − ν1)/2 away from the terminal losses. There are two takeaways from this observation: more resources are needed to differentiate between the two configurations when either (1) the envelope functions are wider or (2) the terminal losses are closer together.
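To make the subroutine concrete, here is a minimal Python sketch of SUCCESSIVEHALVING as described above. The callables get_config and run_then_return_val_loss stand in for the user-supplied methods defined later in Section 3.1, and the eta parameter generalizes "throw out the worst half" to keeping a 1/eta fraction; this is an illustrative sketch, not the authors' implementation.

def successive_halving(get_config, run_then_return_val_loss, n, r, eta=2):
    # Uniformly allocate the resource r to n configurations, keep the
    # best 1/eta fraction, multiply the resource by eta, and repeat.
    configs = [get_config() for _ in range(n)]
    while len(configs) > 1:
        # evaluate every surviving configuration with the current resource r
        losses = [run_then_return_val_loss(t, r) for t in configs]
        k = max(1, len(configs) // eta)
        ranked = sorted(zip(losses, range(len(configs))))
        configs = [configs[i] for _, i in ranked[:k]]
        r = r * eta  # survivors get eta times more resource next round
    return configs[0]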
However, in practice, the optimal allocation strategy is unknown because we do not have knowledge of the envelope functions nor the distribution of terminal losses. Hence, if more resources are required before configurations can differentiate themselves in terms of quality (e.g., if an iterative training method converges very slowly for a given dataset or if randomly selected hyperparameter configurations perform similarly well) then it would be reasonable to work with a small number of configurations. In contrast, if the quality of a configuration is typically revealed using minimal resources (e.g., if iterative training methods converge very quickly for a given dataset or if randomly selected hyperparameter configurations are of low quality with high probability) then n is the bottleneck and we should choose n to be large."}, {"section_index": "3", "section_name": "3.1 HYPERBAND", "section_text": "HYPERBAND, shown in Algorithm 1, addresses this "n versus B/n" problem by considering several possible values of n for a fixed B, in essence performing a grid search over feasible values of n. Associated with each value of n is a minimum resource r that is allocated to all configurations before some are discarded; a larger value of n corresponds to a smaller r and hence more aggressive early stopping. There are two components to HYPERBAND: (1) the inner loop invokes SUCCESSIVEHALVING for fixed values of n and r (lines 3-9) and (2) the outer loop iterates over different values of n and r (lines 1-2). We will refer to each such run of SUCCESSIVEHALVING within HYPERBAND as a "bracket." Each bracket is designed to use about B total resources and corresponds to a different tradeoff between n and B/n. A single execution of HYPERBAND takes a finite number of iterations and in practice can be repeated indefinitely.

HYPERBAND requires two inputs: (1) R, the maximum amount of resource that can be allocated to a single configuration, and (2) η, an input that controls the proportion of configurations discarded in each round of SUCCESSIVEHALVING. The two inputs dictate how many different brackets are considered; specifically, s_max + 1 different values for n are considered with s_max = ⌊log_η(R)⌋. HYPERBAND begins with the most aggressive bracket s = s_max, which sets n to maximize exploration, subject to the constraint that at least one configuration is allocated R resources. Each subsequent bracket reduces n by a factor of approximately η until the final bracket, s = 0, in which every configuration is allocated R resources (this bracket simply performs classical random search). Hence, HYPERBAND performs a geometric search in the average budget per configuration to address the "n versus B/n" problem, at the cost of approximately s_max + 1 times more work than running SUCCESSIVEHALVING for a fixed n. By doing so, HYPERBAND is able to exploit situations in which adaptive allocation works well,
while protecting itself in situations where more conservative allocations are required.

Algorithm 1: HYPERBAND algorithm for hyperparameter optimization.
input: R, η (default η = 3)
initialization: s_max = ⌊log_η(R)⌋, B = (s_max + 1)R
1   for s ∈ {s_max, s_max − 1, ..., 0} do
2       n = ⌈(B/R) · η^s/(s + 1)⌉,   r = R · η^{−s}
        // begin SUCCESSIVEHALVING with (n, r) inner loop
3       T = get_hyperparameter_configuration(n)
4       for i ∈ {0, ..., s} do
5           n_i = ⌊n · η^{−i}⌋
6           r_i = r · η^{i}
7           L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8           T = top_k(T, L, ⌊n_i/η⌋)
9       end
10  end
11  return configuration with the smallest intermediate loss seen so far.

R represents the maximum amount of resources that can be allocated to any given configuration. In most cases, there is a natural upper bound on the maximum budget per configuration that is often dictated by the resource type (e.g., training set size for dataset downsampling; limitations based on memory constraints for feature downsampling; rules of thumb regarding the number of epochs when iteratively training neural networks). R is also the number of configurations evaluated in the bracket that performs the most exploration, i.e., s = s_max. In practice one may want n ≤ n_max to limit the overhead associated with training many configurations on a small budget, i.e., costs associated with initialization, loading a model, and validation. In this case, set s_max = ⌊log_η(n_max)⌋.

The value of η can be viewed as a knob that can be tuned based on practical user constraints. Larger values of η correspond to a more aggressive elimination schedule and thus fewer rounds of elimination; specifically, each round retains a 1/η fraction of configurations for a total of ⌊log_η(n)⌋ + 1 rounds of elimination with n configurations. If one wishes to receive a result faster at the cost of a sub-optimal asymptotic constant, one can increase η to reduce the budget per bracket B = (⌊log_η(R)⌋ + 1)R. We stress that results are not very sensitive to the choice of η. In practice we suggest taking η to be equal to 3 or 4.

HYPERBAND requires the following methods to be defined for any given learning problem: get_hyperparameter_configuration(n) returns a set of n i.i.d. samples from some distribution defined over the hyperparameter configuration space; run_then_return_val_loss(t, r) takes a hyperparameter configuration (t) and resource allocation (r) and returns the validation loss after training for the allocated resources; and top_k(configs, losses, k) takes a set of configurations as well as their associated losses and returns the top k performing configurations.

We next present a simple example to provide intuition. We work with the MNIST dataset and optimize hyperparameters for the LeNet convolutional neural network trained using mini-batch SGD. Our search space includes learning rate, batch size, and number of kernels for the two layers of the network as hyperparameters (details are shown in Table 3 in Appendix A).

We further define the number of iterations as the resource to allocate, with one unit of resource corresponding to one epoch or a full pass over the dataset. We set R to 81 and use the default value of η = 3, resulting in s_max = 4 and thus 5 brackets of SUCCESSIVEHALVING with different tradeoffs between n and B/n. The resources allocated within each bracket are displayed in Table 1.

         s=4        s=3        s=2        s=1        s=0
  i    n_i  r_i   n_i  r_i   n_i  r_i   n_i  r_i   n_i  r_i
  0     81    1    27    3     9    9     6   27     5   81
  1     27    3     9    9     3   27     2   81
  2      9    9     3   27     1   81
  3      3   27     1   81
  4      1   81

Table 1: Values of n_i and r_i for the brackets of HYPERBAND when R = 81 and η = 3.
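The following is a runnable Python rendering of Algorithm 1 under the same three user-supplied methods defined above; it is a sketch of the pseudocode, not the authors' released implementation.

from math import log, floor, ceil

def hyperband(get_hyperparameter_configuration, run_then_return_val_loss, R, eta=3):
    s_max = floor(log(R, eta))
    B = (s_max + 1) * R                       # total budget per bracket
    best_config, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):            # outer loop over brackets
        n = ceil((B / R) * eta**s / (s + 1))  # initial number of configurations
        r = R * eta**(-s)                     # initial resource per configuration
        T = get_hyperparameter_configuration(n)
        for i in range(s + 1):                # SuccessiveHalving inner loop
            n_i = floor(n * eta**(-i))
            r_i = r * eta**i
            L = [run_then_return_val_loss(t, r_i) for t in T]
            for loss, t in zip(L, T):         # track best intermediate loss
                if loss < best_loss:
                    best_loss, best_config = loss, t
            order = sorted(range(len(T)), key=lambda j: L[j])
            T = [T[j] for j in order[:max(1, floor(n_i / eta))]]
    return best_config

For R = 81 and η = 3 this schedule reproduces the five brackets shown in Table 1.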
Figure 2 compares the empirical performance of the different brackets of HYPERBAND if they were used separately, as well as standard HYPERBAND (all results are averaged over 70 trials). In practice we do not know a priori which bracket s ∈ {0, ..., 4} will be most effective, and in this case neither the most (s = 4) nor least aggressive (s = 0) setting is optimal. However, note that HYPERBAND does nearly as well as the optimal bracket (s = 3) and vastly outperforms the baseline uniform allocation (i.e., random search), which is equivalent to bracket s = 0.

Figure 2: Performance of individual brackets s and HYPERBAND (test error vs. seconds)."}, {"section_index": "4", "section_name": "3.3 OVERVIEW OF THEORETICAL RESULTS", "section_text": "Although a detailed theoretical analysis is beyond the scope of this paper, we provide an intuitive, high-level description of the theoretical properties of HYPERBAND. Suppose there are n configurations, each with a given terminal validation error ν_i for i = 1, ..., n. Without loss of generality, index the configurations by performance so that ν_1 corresponds to the best performing configuration, ν_2 to the second best, and so on. Now consider the task of identifying the best configuration. The optimal strategy would allocate to each configuration i the minimum resource required to distinguish it from ν_1, i.e., enough so that the envelope functions depicted in Figure 1(c) bound the intermediate loss to be less than (ν_i − ν_1)/2 away from the terminal value. As shown in Jamieson & Talwalkar (2015) and Li et al. (2016), the budget required by SUCCESSIVEHALVING is in fact only a small factor away from this optimal approach because it capitalizes on configurations that are easy to distinguish from ν_1. In contrast, the naive uniform allocation strategy, which allocates B/n to each configuration, has to allocate to every configuration the resource required to distinguish ν_2 from ν_1.

The relative size of the budget required for uniform allocation and SUCCESSIVEHALVING depends on the envelope functions bounding deviation from terminal losses as well as the distribution from which the ν_i's are drawn. The budget required for SUCCESSIVEHALVING is smaller when the optimal n versus B/n tradeoff requires fewer resources per configuration. Hence, if the envelope functions tighten quickly as a function of resource allocated, or the average distances between terminal losses are large, then SUCCESSIVEHALVING can be substantially faster than uniform allocation. Of course we do not have knowledge of either function in practice, so we will hedge our aggressiveness with HYPERBAND. Remarkably, despite having no knowledge of the envelope functions or the distribution of ν_i's, HYPERBAND requires a budget that is only log factors larger than the optimal for SUCCESSIVEHALVING. See Li et al. (2016) for details.
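To make the comparison with uniform allocation concrete, the toy calculation below (our own illustration, not from Li et al. (2016)) assumes envelope functions of width c/r and draws terminal losses uniformly at random; it computes the budget needed by the optimal per-configuration allocation versus uniform allocation.

import random

random.seed(0)
c = 10.0  # assumed envelope constant: |intermediate - terminal| <= c / r
v = sorted(random.random() for _ in range(81))  # terminal losses, v[0] is best
# resource needed before configuration i is distinguishable from the best:
# envelope width c/r must drop below (v[i] - v[0]) / 2, i.e. r >= 2c/(v[i]-v[0])
need = [2 * c / (v[i] - v[0]) for i in range(1, len(v))]
budget_optimal = sum(need)           # give each arm only what it needs
budget_uniform = len(v) * max(need)  # every arm gets what the hardest arm needs
print(budget_uniform / budget_optimal)  # factor wasted by uniform allocation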
(Figure 3 panels: average test error vs. multiple of R used for (a) CIFAR-10, (b) MRBI, and (c) SVHN.)"}, {"section_index": "5", "section_name": "4 HYPERPARAMETER OPTIMIZATION EXPERIMENTS", "section_text": "In this section, we evaluate the empirical behavior of HYPERBAND with iterations, data subsamples, and features as resources. For all experiments, we compare HYPERBAND with three well known Bayesian optimization algorithms: SMAC, TPE, and Spearmint. Additionally, we show results for SUCCESSIVEHALVING corresponding to repeating the most exploratory bracket of HYPERBAND. Finally, for all experiments, we benchmark against standard random search and random_2x, which is a variant of random search with twice the budget of other methods.

We study a convolutional neural network with the same architecture as that used in Snoek et al. (2012) and Domhan et al. (2015) from cuda-convnet. The search spaces used in the two previous works differ, and we used a search space similar to that of Snoek et al. (2012), with 6 hyperparameters for stochastic gradient descent and 2 hyperparameters for the response normalization layers. In line with the two previous works, we used a batch size of 100 for all experiments. For these experiments, we also compare against a variant of SMAC named SMAC_early that uses the early termination criterion proposed in Domhan et al. (2015) for neural networks. We view SMAC with early stopping to be a combination of adaptive configuration selection and configuration evaluation. See Appendix A for more details about the experimental setup.

Datasets: We considered three image classification datasets: CIFAR-10 (Krizhevsky, 2009), rotated MNIST with background images (MRBI) (Larochelle et al., 2007), and Street View House Numbers (SVHN) (Netzer et al., 2011). CIFAR-10 and SVHN contain 32 × 32 RGB images while MRBI contains 28 × 28 grayscale images. The splits used for each dataset are as follows: (1) CIFAR-10 has 40k, 10k, and 10k instances; (2) MRBI has 10k, 2k, and 50k instances; and (3) SVHN has close to 600k, 6k, and 26k instances for training, validation, and test respectively. For all datasets, the only preprocessing performed on the raw images was demeaning.

HYPERBAND Configuration: For these experiments, one unit of resource corresponds to 100 mini-batch iterations. For CIFAR-10 and MRBI, R is set to 300 (or 30k total iterations). For SVHN, R is set to 600 (or 60k total iterations) to accommodate the larger training set. η was set to 4 for all experiments, resulting in 5 SUCCESSIVEHALVING brackets for HYPERBAND.

Results: Ten independent trials were performed for each searcher. For CIFAR-10, the results in Figure 3(a) show that HYPERBAND is more than an order of magnitude faster than its competitors. In Figure 6 of Appendix A we extend the x-axis for CIFAR-10 out to 100R. The results show that Bayesian optimization methods ultimately converge to similar errors as HYPERBAND. For MRBI, HYPERBAND is more than an order of magnitude faster than standard configuration selection approaches and 5x faster than SMAC with early stopping. For SVHN, while HYPERBAND finds a good configuration faster, Bayesian optimization methods are competitive and SMAC with early stopping outperforms HYPERBAND.
This result demonstrates that there is merit to incorporating early stopping with configuration selection approaches.

Figure 3: Average test error across 10 trials is shown in all plots. Label "SMAC early" corresponds to SMAC with the early stopping criterion proposed in Domhan et al. (2015) and label "bracket s = 4" corresponds to repeating the most exploratory bracket of HYPERBAND.

For computationally expensive problems in high dimensional search spaces, it may make sense to just repeat the most exploratory brackets. Similarly, if meta-data is available about a problem or it is known that the quality of a configuration is evident after allocating a small amount of resource, then one should just repeat the most exploratory bracket. Indeed, for these experiments, repeating the most exploratory bracket of HYPERBAND outperforms cycling through all the brackets. In fact, bracket s = 4 vastly outperforms all other methods on CIFAR-10 and MRBI and is nearly tied with SMAC early for first on SVHN.

Finally, CIFAR-10 is a very popular dataset and state-of-the-art models achieve much better accuracy than what is shown in Figure 3. The difference in performance is mainly attributable to higher model complexities and data manipulation (i.e., using reflection or random cropping to artificially increase the dataset size). If we limit the comparison to published results that use the same architecture and exclude data manipulation, the best human expert result for the dataset is 18% error and hyperparameter-optimized results are 15.0% for Snoek et al. (2012)[4] and 17.2% for Domhan et al. (2015). These results are better than our results on CIFAR-10 because they use 25% more data by including the validation set and also train for more epochs. The best model found by HYPERBAND achieved a test error of 17.0% when trained on the combined training and validation data for 300 epochs.

Figure 4 shows that HYPERBAND returns a good configuration after just the first SUCCESSIVEHALVING bracket in approximately 20 minutes; other searchers fail to reach this error rate on average even after the entire 12 hours. Notably, HYPERBAND was able to evaluate over 250 configurations in this first bracket of SUCCESSIVEHALVING, while competitors were able to evaluate only three configurations in the same amount of time. Consequently, HYPERBAND is over 30x faster than Bayesian optimization methods and 70x faster than random search. Bracket s = 4 slightly outperforms HYPERBAND but the terminal performance for the two algorithms is the same. Random_2x is competitive with SMAC and TPE.

We next demonstrate the performance of HYPERBAND when using features as a resource, focusing on random feature approximations for kernel methods. Features are randomly generated using the method described in Rahimi & Recht (2007) to approximate the RBF kernel, and these random features are then used as inputs to a ridge regression classifier; a sketch of this pipeline is given below. We consider hyperparameters of a random feature kernel approximation classifier trained on CIFAR-10, including preprocessing method, kernel length scale, and l2 penalty.
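A minimal NumPy sketch of that pipeline follows: random Fourier features for the RBF kernel (Rahimi & Recht, 2007) feeding a closed-form ridge regression classifier. The function names and the one-hot multi-class encoding are our own illustration.

import numpy as np

def rbf_random_features(X, n_features, length_scale, seed=0):
    # z(x) = sqrt(2/D) cos(Wx + b) gives E[z(x)^T z(y)] ~ exp(-||x-y||^2 / (2 l^2))
    rng = np.random.RandomState(seed)
    W = rng.normal(scale=1.0 / length_scale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def ridge_classifier_fit(Z, Y_onehot, lam):
    # regularized least squares on the features; lam is the l2 penalty
    A = Z.T @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ Y_onehot)

# allocating more "resource" simply means drawing more random features, e.g.
# Z = rbf_random_features(X_train, 100 * r_i, length_scale)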
We impose an upper bound of 100k random features for the kernel approximation so that the data will comfortably fit into a machine with 60GB of memory. Additionally, we set one unit of resource to be 100 features for an R = 1000, which gives 5 different brackets with η = 4. Each searcher is run for 10 trials, with each trial lasting 12 hours on an n1-standard-16 machine from Google Cloud Compute. The results in Figure 5 show that HYPERBAND is around 6x faster than Bayesian methods and random search. HYPERBAND performs similarly to bracket s = 4. Random_2x outperforms Bayesian optimization algorithms.

[4] We were unable to reproduce this result even after receiving the optimal hyperparameters from the authors through a personal communication.

Across the three datasets, HYPERBAND and SMAC early are the only two methods that consistently outperform random_2x. On these datasets, HYPERBAND is over 20x faster than random search while SMAC early is less than 7x faster than random search within the evaluation window. In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R. Additionally, HYPERBAND is less variable than other searchers across trials, which is highly desirable in practice (see Appendix A for plots with error bars).

In this experiment, we use HYPERBAND with data samples as the resource to optimize the hyperparameters of a kernel-based classification task on CIFAR-10. We use the multi-class regularized least squares classification model which is known to have comparable performance to SVMs (Rifkin & Klautau, 2004; Agarwal et al., 2014) but can be trained significantly faster. The hyperparameters considered in the search space include preprocessing method, regularization, kernel type, kernel length scale, and other kernel specific hyperparameters (see Appendix A for more details). HYPERBAND is run with η = 4 and R = 400, with each unit of resource representing 100 datapoints. Similar to previous experiments, these inputs result in a total of 5 brackets. Each hyperparameter optimization algorithm is run for ten trials on Amazon EC2 m4.2xlarge instances; for a given trial, HYPERBAND is allowed to run for two outer loops, bracket s = 4 is repeated 10 times, and all other searchers are run for 12 hours.

Figure 4: Average test error of the best kernel regularized least square classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished."}, {"section_index": "6", "section_name": "4.4 EXPERIMENTAL DISCUSSION", "section_text": "For a given R, the most exploratory SUCCESSIVEHALVING round performed by HYPERBAND evaluates η^⌊log_η(R)⌋ configurations using a budget of (⌊log_η(R)⌋ + 1)R, which gives an upper bound on the potential speedup over random search. If training time scales linearly with the resource, the maximum speedup offered by HYPERBAND compared to random search is η^⌊log_η(R)⌋/(⌊log_η(R)⌋ + 1). For the values of η and R used in our experiments, the maximum speedup over random search is approximately 50x given linear training time. However, we observe a range of speedups from 6x to 70x faster than random search. The differences in realized speedup can be explained by two factors: (1) the scaling properties of total evaluation time as a function of the allocated resource and (2) the difficulty of finding a good configuration.

If training time is superlinear as a function of the resource, then HYPERBAND can offer higher speedups. More generally, if training scales like a polynomial of degree p > 1, the maximum speedup of HYPERBAND over random search is correspondingly larger, growing with p. Hence, higher speedups were observed for the kernel least square classifier experiment discussed in Section 4.2, where the training time scaled quadratically as a function of the resource.
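As a concrete check of the linear-time bound above, the snippet below evaluates it for the settings used in the neural network experiments (η = 4, R = 300):

from math import log, floor

def max_speedup_linear(eta, R):
    # most exploratory bracket: eta^s_max configs for a (s_max + 1) * R budget
    s_max = floor(log(R, eta))
    return eta**s_max / (s_max + 1)

print(max_speedup_linear(4, 300))  # ~51.2x, the "approximately 50x" quoted above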
If 10 randomly sampled configurations are sufficient to find a good hyperparameter setting, then the benefit of evaluating orders of magnitude more configurations is muted. Generally, the difficulty of the problem scales with the dimension of the search space since coverage diminishes with dimensionality. For low dimensional problems, the number of configurations evaluated by random search and Bayesian methods is exponential in the number of dimensions so good coverage can be achieved; i.e., if d = 3 as in the features subsampling experiment, then n = O(2^d) = 8. Hence, HYPERBAND is only 6x faster than random search on the feature subsampling experiment. For the neural network experiments however, we hypothesize that faster speedups are observed for HYPERBAND because the dimension of the search space is higher."}, {"section_index": "7", "section_name": "5 FUTURE WORK", "section_text": "We have introduced a novel bandit-based method for adaptive configuration evaluation with demonstrated competitive empirical performance. Future work involves exploring (i) embedding HYPERBAND into parallel and distributed computing environments; (ii) adjusting for training methods with different convergence rates; and (iii) combining HYPERBAND with non-random sampling methods.

Figure 5: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 are calculated in every evaluation instead of at the end of a bracket."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. In JMLR, 2012.

J. Bergstra et al. Algorithms for hyper-parameter optimization. In NIPS, 2011.

A. György and L. Kocsis. Efficient multi-start strategies for local search algorithms. JAIR, 41, 2011.

F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proc. of LION-5, 2011.

K. Jamieson and R. Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp. 1-6. IEEE, 2014.

K. Jamieson and A. Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In AISTATS, 2015.

A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. arXiv preprint arXiv:1605.07079, 2016.

A. Krizhevsky. Learning multiple layers of features from tiny images.
In Technical report, Department of Computer Science, University of Toronto, 2009.

T. Krueger, D. Panknin, and M. Braun. Fast cross-validation via sequential testing. Journal of Machine Learning Research, 16:1103-1155, 2015.

H. Larochelle et al. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007.

L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv:1603.06560, 2016.

O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1993.

Y. Netzer et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.

G. Rätsch, T. Onoda, and K.R. Müller. Soft margins for adaboost. Machine Learning, 42:287-320, 2001.

R. Rifkin and A. Klautau. In defense of one-vs-all classification. JMLR, 2004.

A. Sabharwal, H. Samulowitz, and G. Tesauro. Selecting near-optimal learners via incremental data allocation. In AAAI, 2016.

P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

J. Snoek, H. Larochelle, and R. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012.

J. Snoek et al. Bayesian optimization using deep neural networks. In ICML, 2015.

K. Swersky, J. Snoek, and R. Adams. Multi-task bayesian optimization. In NIPS, 2013.

K. Swersky, J. Snoek, and R. P. Adams. Freeze-thaw bayesian optimization. arXiv:1406.3896, 2014."}, {"section_index": "9", "section_name": "A.1 COMPARISON WITH CVST", "section_text": "We replicated the classification experiments in Krueger et al. (2015) that train a support vector machine on the datasets from the IDA benchmark (Rätsch et al., 2001). All experiments were performed on Google Cloud Compute's n1-standard-1 instances. Following Krueger et al. (2015), we evaluated HYPERBAND and CVST on the same 2d grid of 610 hyperparameters and recorded the best test error and duration for 50 trials. The only modification we made to their original experimental setup was the data splits; instead of half for test and half for training, we used 1/11th for test and 10/11th for training. HYPERBAND performed holdout evaluation using 1/10th of the training data as the validation set. We set η = 3, and R was set for each dataset so that a minimum resource of 50 datapoints is allocated to each configuration. Table 2 shows that CVST and HYPERBAND achieve comparable test errors (the differences are well within the error bars), while HYPERBAND is significantly faster than CVST on all datasets. More granularly, while CVST on average has slightly lower mean error, HYPERBAND is within 0.2% of CVST on 5 of the 7 datasets. Additionally, for each of the 7 datasets,
HYPERBAND does as well as or better than CVST in over half of the trials."}, {"section_index": "10", "section_name": "A.2 LENET EXPERIMENT", "section_text": "| Hyperparameter | Scale | Min | Max |
| Learning Rate | log | 1e-3 | 1e-1 |
| Batch size | log | 1e1 | 1e3 |
| Layer-2 Num Kernels (k2) | linear | 10 | 60 |
| Layer-1 Num Kernels (k1) | linear | 5 | k2 |

Table 3: Hyperparameter space for the LeNet application of Section 3.2. Note that the number of kernels in Layer-1 is upper bounded by the number of kernels in Layer-2.

The CVST algorithm from Krueger et al. (2015) focuses on speeding up standard k-fold cross-validation. We did not include it as one of the competitors in Section 4 because the experiments we selected were too computationally expensive for multi-fold cross-validation and CVST is not an anytime algorithm. Nonetheless, the CVST algorithm is an interesting approach and was shown to have promising empirical performance in Krueger et al. (2015). Hence, we performed a small scale comparison modeled after their empirical studies between CVST and HYPERBAND.

              CVST                     Hyperband                Ratio
Dataset       Test Error    Duration   Test Error    Duration   Duration
banana        9.8%±1.6%     12.3±5.0   9.9%±1.5%     1.8±0.1    6.7±2.8
german        26.0%±4.5%    –          27.6%±4.8%    0.7±0.0    4.1±1.7
image         2.9%±1.1%     –          3.3%±1.4%     1.0±0.0    3.4±0.9
splice        8.6%±1.8%     –          8.7%±1.8%     3.9±0.1    2.7±0.8
ringnorm      1.4%±0.4%     –          1.5%±0.4%     6.5±0.3    3.3±0.4
twonorm       2.4%±0.5%     –          2.4%±0.5%     6.5±0.2    4.3±1.5
waveform      9.3%±1.3%     –          9.5%±1.3%     2.9±0.2    4.8±1.0

Table 2: The test error and duration columns show the average value plus/minus the standard deviation across 50 trials. Duration is measured in minutes and indicates how long it took each method to evaluate the grid of 610 hyperparameters used in Krueger et al. (2015). The ratio column shows the ratio of the duration for CVST over that for HYPERBAND with associated standard deviation.

We trained the LeNet convolutional neural network on MNIST using mini-batch SGD. Code is available for the network at http://deeplearning.net/tutorial/lenet.html. The search space for the LeNet example discussed in Section 3.2 is shown in Table 3.

For the experiments discussed in Section 4.1, the exact architecture used is the 18% model provided on cuda-convnet for CIFAR-10.[5]

| Hyperparameter | Scale | Min | Max |
| Learning Parameters | | | |
| Initial Learning Rate | log | 5*10^-5 | 5 |
| Conv1 l2 Penalty | log | 5*10^-5 | 5 |
| Conv2 l2 Penalty | log | 5*10^-5 | 5 |
| Conv3 l2 Penalty | log | 5*10^-5 | 5 |
| FC4 l2 Penalty | log | 5*10^-3 | 500 |
| Learning Rate Reductions | integer | 0 | 3 |
| Local Response Normalization | | | |
| Scale | log | 5*10^-6 | 5 |
| Power | linear | 0.01 | 3 |

Table 4: Hyperparameters and associated ranges for the three-layer convolutional network.

Search Space: The search space used for the experiments is shown in Table 4; a sketch of sampling from it appears below. The learning rate reductions hyperparameter indicates how many times the learning rate was reduced by a fixed factor over the maximum iteration window. For example, on CIFAR-10, which has a maximum iteration of 30,000, a learning rate reduction of 2 corresponds to reducing the learning rate every 10,000 iterations, for a total of 2 reductions over the 30,000 iteration window. All hyperparameters with the exception of the learning rate decay reduction overlap with those in Snoek et al. (2012).
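As an illustration of get_hyperparameter_configuration for this search space, the sketch below draws one configuration from the Table 4 ranges. The dictionary keys are our own names, and we assume log-uniform sampling for the log-scale hyperparameters.

import math, random

def log_uniform(lo, hi, rng=random):
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

def sample_table4_config(rng=random):
    return {
        "initial_learning_rate": log_uniform(5e-5, 5.0, rng),
        "conv1_l2_penalty":      log_uniform(5e-5, 5.0, rng),
        "conv2_l2_penalty":      log_uniform(5e-5, 5.0, rng),
        "conv3_l2_penalty":      log_uniform(5e-5, 5.0, rng),
        "fc4_l2_penalty":        log_uniform(5e-3, 500.0, rng),
        "lr_reductions":         rng.randint(0, 3),       # integer in {0,...,3}
        "lrn_scale":             log_uniform(5e-6, 5.0, rng),
        "lrn_power":             rng.uniform(0.01, 3.0),  # linear scale
    }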
Two hyperparameters from Snoek et al. (2012) were excluded from our experiments: (1) the width of the response normalization layer was excluded due to limitations of the Caffe framework and (2) the number of epochs was excluded because it is incompatible with dynamic resource allocation.

Datasets: CIFAR-10 and SVHN contain 32 × 32 RGB images while MRBI contains 28 × 28 grayscale images. For all datasets, the only preprocessing performed on the raw images was demeaning. For CIFAR-10, the training (40,000 instances) and validation (10,000 instances) sets were sampled from data batches 1-5 with balanced classes. The original test set (10,000 instances) is used for testing. For MRBI, the training (10,000 instances) and validation (2,000 instances) sets were sampled from the original training set with balanced classes. The original test set (50,000 instances) is used for testing. Lastly, for SVHN, the train, validation, and test splits were created using the same procedure as that in Sermanet et al. (2012).

Computational Considerations: The experiments took the equivalent of over 1 year of GPU hours on NVIDIA GRID K520 cards available on Amazon EC2 g2.8xlarge instances. We set a total budget constraint in terms of iterations instead of compute time to make comparisons hardware independent.[6] Comparing progress by iterations instead of time ignores overhead costs not associated with training, like the cost of configuration selection for Bayesian methods and the model initialization and validation costs for HYPERBAND. While overhead is hardware dependent, the overhead for HYPERBAND is below 5% on EC2 g2.8xlarge machines, so comparing progress by time passed would not impact results significantly.

Due to the high computational cost of these experiments, we were not able to run all searchers out to convergence. However, we did double the budget for each trial of CIFAR-10 to allow for comparison of the searchers as they near convergence. Figure 6 shows that while Bayesian optimization methods achieve similar performance as HYPERBAND and SUCCESSIVEHALVING, it takes them much longer to achieve a comparable error rate.

[5] The model specification is available at http
[6] Most trials were run on Amazon EC2 g2.8xlarge instances due to the large computational demand of these experiments.

(Figure 6 panels: average test error vs. multiple of R used for (a) CIFAR-10, (b) MRBI, and (c) SVHN.)

Figure 6: Average test error across 10 trials is shown in all plots. Error bars indicate the maximum and minimum ranges of the test error corresponding to the model with the best validation error.

Comparison with Early Stopping: Adaptive allocation for hyperparameter optimization can be thought of as a form of early stopping where less promising configurations are halted before completion. Domhan et al. (2015) propose an early stopping method for neural networks and combine it with SMAC to speed up hyperparameter optimization.
Their method stops training a configuration if the probability of the configuration beating the current best is below a specified threshold. This probability is estimated by extrapolating learning curves fit to the intermediate validation error losses of a configuration. If a configuration is terminated early, the predicted terminal value from the estimated learning curves is used as the validation error passed to the hyperparameter optimization algorithm. Hence, if the learning curve fit is poor, it could impact the performance of the configuration selection algorithm. While this approach is heuristic in nature, it does demonstrate promising empirical performance, so we included SMAC with early termination as a competitor. We used the conservative termination criterion with default parameters, recorded the validation loss every 400 iterations, and evaluated the termination criterion 3 times within the training period (every 8k iterations for CIFAR-10 and MRBI and every 16k iterations for SVHN).[7] Comparing performance by the total multiple of R used is conservative because it does not account for the time spent fitting the learning curve in order to check the termination criterion.

[7] We used the code provided at https://github.com/automl/pylearningcurvepredictor."}, {"section_index": "11", "section_name": "A.4 KERNEL CLASSIFICATION EXPERIMENTS", "section_text": "We trained the regularized least-squares classification model using a block coordinate descent solver. Our models take less than 10 minutes to train on CIFAR-10 using an 8-core machine, while the default SVM method in Scikit-learn is single core and takes hours. Table 5 shows the hyperparameters and associated ranges considered in the kernel least squares classification experiment discussed in Section 4.2. The cost term C is divided by the number of samples so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased (squared error is summed across observations and not averaged). The regularization term λ is equal to the inverse of the scaled cost term C. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 7.

| Hyperparameter | Type | Values |
| preprocessor | Categorical | min/max, standardize, normalize |
| kernel | Categorical | rbf, polynomial, sigmoid |
| C | Continuous | log [10~*, 10°] |
| gamma | Continuous | log [10~*, 10] |
| degree (if kernel=poly) | integer | [2, 5] |
| coef0 (if kernel=poly, sigmoid) | uniform | [-1.0, 1.0] |

Table 5: Hyperparameter space for the kernel regularized least squares classification problem discussed in Section 4.2.

Table 6 shows the hyperparameters and associated ranges considered in the random features kernel approximation classification experiment discussed in Section 4.3. The regularization term λ is divided by the number of features so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 8.

Figure 7: Average test error of the best kernel regularized least square classification model found by each searcher on CIFAR-10.
The color coded dashed lines indicate when the last trial of a given searcher finished. Error bars correspond to observed minimum and maximum test error across 10 trials.

Figure 8: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 are calculated in every evaluation instead of at the end of a bracket. Error bars correspond to observed minimum and maximum test error across 10 trials."}]
SkXIrV9le
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The current computer graphics pipelines are the result of efficient implementations required by lim-\nited hardware and high frequency output requirements. These requirements were also achieved with\nthe use of explicit physics and optic constraints and modeling with constantly improving data struc-\n6) and videc\n\nIn machine learning on the other hand, for a long time image\ngenerative models had been investigated with statistical approaches that\nmodel images down to the pixel Onn Hiner MS sometimes assuming neigh-\nborhood statistical dependencies (Osindero & Hinton|/2008). In video prediction, the current state\nof the art uses variations of deep convolutional recurrent neural networks\n(Lotter et al.|/2016) (Finn et al.|{2016).\nAs a parallel to the classic machine learning approach to image and video interpretation and pre\ndiction is a growing trend in the deep learning literature for modeling vision as inverse graphic:\npreted into two groups: supervised and unsupervised vision as inverse graphics. The supervisec\napproach assumes that during training an image is provided with extra information about its rota\ntion, translation, illumination, etc. The goal of the supervised model is to learn an auto-encoder tha\nexplicitly factors out the content of the image and its physical properties. The supervised approacl\n\nis illustrated by|Kulkarni et al.\nThe unsupervised approach requires extra architectural constraints, similar to those assumed in com-\nputer graphics. For example, (2016) modeled the content of a scene with a Generative\nAdversarial Network (Goodfellow et al.||2014) and its location with Spatial Transformer Networks\n@aderberg et al.| 2015). The full model is adapted end-to-end to generate images whose appear-\nance can be changed by independently modifying the what\u201d and/or \u2019where\u201d variables. A similar\napproach was applied to video generation with volumetric convolutional neural networks (Vondrick|\nfet al.] [2016).In two papers by Google DeepMind (2016) they\nimproved the \u2019where\u201d representations of the unsupervised approach and modeled the 3D geometry\nof the scene. This way they explicitly represented object rotation, translation, camera pose, etc.\nTheir approaches were also trained end-to-end with REINFORCE-like stochastic gradients to back-\n\npropagate through non-differentiable parts of the graphics pipeline (Rezende et al.|/2016) or to count"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "convolution result\n\nconvolution\n\nspatial transformer\n\no 0 0 0 4 50 BD\nspatial transformer result\nOther approaches inspired by the graphics pipeline and computer vision geometry in machine learn.\ning uses the physics constraints to estimate the depth of each pixel in the scene and camera pos\u00a2\n\nmovements to predict frames in video (Mahjourian et al} 2016) (Godard et al. 2016).\nThe present paper is closer to the unsupervised approach of vision as inverse graphics. More pre-\ncisely, here we investigate frame prediction in video. Contrary to the work by[Reed et al.|(2016) here\nwe first limit ourselves to simple synthetic 2D datasets and learning models whose representations\ncan be visually interpreted. This way we can investigate exactly what the neural network is learning\nand validate our statistical assumptions. 
Also, we investigate the behavior of Spatial Transformer Networks and question it as the default choice when limited compute resources are available and no scale invariance is required.

First, in the next section we will pose a statistical model that is appropriate for machine learning but inspired by the graphics pipeline.

This section starts with a high level description of the 2D graphics pipeline, followed by a discussion of how to implement it with neural network modules, and finally we define a formal statistical model.

The 2D graphics pipeline starts from geometric primitives and follows with modeling transformations, clipping, viewing transformations and finally scan conversion for generating an image. Here we will deal with previously rasterized bitmaps, i.e. sprites, and will model the translation transformations, rotation and clipping with differentiable operations. This way, the steps in the pipeline can be defined as layers of a neural network and the free parameters can be optimized with backpropagation.

Figure 1: How to get similar results using convolutions with delta-functions and spatial transformers. Input sprite is 8 × 8 pixels and the outputs are 64 × 64 pixels. Note that in the convolution the result shape is rotated 180 degrees and its center is where the delta equals one at pixel (x = 16, y = 16). Note also that the edges of the spatial transformer results are blurred due to bilinear interpolation. The A matrix can be read as "zoom-out" 8 times and translate up and left in a quarter of the resulting size.

For our neural network implementation, we assume a finite set of sprites (later we generalize it to infinite sprites) that will be part of the frames in the video. The image generation network selects a sprite, s, from a memorized sprite database S_i, i ∈ {1, ..., K}, using an addressing signal c:

s = Σ_j c_j S_j.   (1)

For interpretable results it would be optimal to do one-hot memory addressing where c_i = 1 for S_i = s and c_i = 0 otherwise. Note that (1) is differentiable w.r.t. both c_i and S_i, so we can learn the individual sprites from data. We can force c to sum to 1 using the softmax nonlinearity. This approach was inspired by the recent deep learning literature on attention modules (Graves et al., 2014).

When the number of possible sprites is too large it is more efficient to use a compressed representation. Instead of using an address value c we use a content addressable memory where the image generator estimates a code z that is then decoded to the desired sprite with a (possibly nonlinear) function d(z). If we interpret the addressing value z as a latent representation and the content addressable memory d(z) as a decoder, we can use the recent advances in neural networks for generative models to set up our statistical model. We will revisit this later in this section.
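A minimal NumPy sketch of the addressing in Eq. (1), with a softmax over the address logits so that c sums to 1 (the bank size K = 3 is just an example):

import numpy as np

def address_sprite(address_logits, sprite_bank):
    # Eq. (1): s = sum_j c_j S_j, a soft, differentiable database lookup
    c = np.exp(address_logits - address_logits.max())
    c /= c.sum()                                 # softmax -> c sums to 1
    return np.tensordot(c, sprite_bank, axes=1)  # (K,) x (K,H,W) -> (H,W)

bank = np.random.rand(3, 8, 8)                   # K = 3 sprites of 8x8 pixels
s = address_sprite(np.array([2.0, -1.0, 0.5]), bank)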
The translation transformation can be modeled with a convolution with a delta function or using spatial transformers. Note that the translation of an image I(x, y) can be defined as

I(x − τ_x, y − τ_y) = I(x, y) ∗ δ(x − τ_x, y − τ_y),   (2)

where ∗ denotes the image convolution operation. Clipping is naturally handled in such a case. If the output images have finite dimensions and δ(x − τ_x, y − τ_y) is non-zero near its border, the translated image I(x − τ_x, y − τ_y) will be clipped. Another way of implementing the translation operation is using Spatial Transformer Networks (STN) (Jaderberg et al., 2015). An implementation of STN can be defined in two steps: resampling and bilinear interpolation. Resampling is defined by moving the position of the pixels (x, y) in the original image using a linear transform to new positions (x̃, ỹ) as

[x̃, ỹ]ᵀ = A [x, y, 1]ᵀ,  where  A = [[A₁₁, A₁₂, A₁₃], [A₂₁, A₂₂, A₂₃]].   (3)

We assume the coordinates in the original image are integers 0 ≤ x < M and 0 ≤ y < N, where M × N is the size of the image I. Once the new coordinates are defined, we can calculate the values of the pixels in the new image Ĩ using bilinear interpolation:

Ĩ(x̃, ỹ) = w_{x₁y₁} I(x₁, y₁) + w_{x₁y₂} I(x₁, y₂) + w_{x₂y₁} I(x₂, y₁) + w_{x₂y₂} I(x₂, y₂),   (4)

where (x₁, x₂, y₁, y₂) are integers, x₁ ≤ x̃ ≤ x₂, y₁ ≤ ỹ ≤ y₂, and

w_{x₁y₁} = (⌊x̃⌋ + 1 − x̃)(⌊ỹ⌋ + 1 − ỹ),
w_{x₁y₂} = (⌊x̃⌋ + 1 − x̃)(ỹ − ⌊ỹ⌋),
w_{x₂y₁} = (x̃ − ⌊x̃⌋)(⌊ỹ⌋ + 1 − ỹ),
w_{x₂y₂} = (x̃ − ⌊x̃⌋)(ỹ − ⌊ỹ⌋).   (5)

To avoid sampling from outside the image we clip the values ⌊x̃⌋ and ⌊x̃⌋ + 1 between 0 and M and the values ⌊ỹ⌋ and ⌊ỹ⌋ + 1 between 0 and N. We omitted that in (5) for conciseness. Note that (4) is piecewise differentiable w.r.t. I.

We can define translation through operations with

A = [[1, 0, τ_x], [0, 1, τ_y]],   (6)

and rotation with

A = [[cos ρ, sin ρ, 0], [−sin ρ, cos ρ, 0]].   (7)

Image rescaling is achieved in that framework by rescaling the square submatrix A_{1:2,1:2}. We illustrate in Fig. 1 how to get similar results using convolutions with a delta-function and spatial transformers.
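Both "where" operators above can be sketched in a few lines of NumPy/SciPy; the loops are written for clarity rather than speed, and the grid convention of Eq. (3) is simplified.

import numpy as np
from scipy.signal import convolve2d

def translate_by_delta(sprite, tx, ty, H, W):
    # Eq. (2): convolve a delta image with the sprite; this places a
    # 180-degree-rotated copy of the sprite centered at (tx, ty) and
    # clips whatever falls outside the H x W canvas (cf. Fig. 1).
    delta = np.zeros((H, W))
    delta[ty, tx] = 1.0
    return convolve2d(delta, sprite, mode="same")

def bilinear_warp(I, A, H, W):
    # Eqs. (3)-(5): for each output pixel, map (x, y, 1) through the 2x3
    # matrix A and bilinearly interpolate the four neighboring pixels.
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            xs, ys = A @ np.array([x, y, 1.0])
            x1, y1 = int(np.floor(xs)), int(np.floor(ys))
            for xi in (x1, x1 + 1):
                for yi in (y1, y1 + 1):
                    if 0 <= xi < I.shape[1] and 0 <= yi < I.shape[0]:
                        w = (1 - abs(xs - xi)) * (1 - abs(ys - yi))
                        out[y, x] += w * I[yi, xi]
    return out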
Considering the tools defined above, we can define a statistical model of 2D images that explicitly represents sprites and their positions in the scene. We can use the free energy of this statistical model to optimize a neural network. Let us start with a static single frame model and later generalize it to video.

Let an image I ∼ p_θ(I) be composed of a sprite s ∼ p_θ(s) centered at the (x, y) coordinates in the larger image I. Denote these coordinates as a random variable δ_xy ∼ p_θ(δ_xy), where θ are the model parameters. p_θ(δ_xy) can be factored into two marginal categorical distributions Cat(δ_x) and Cat(δ_y) that model the probability of each coordinate of the sprite independently. For the finite sprite dataset, p_θ(s) is also a categorical distribution conditioned on the true sprites. For this finite case the generative model can be factored as

p_θ(I, s, δ_xy) = p_θ(s) p_θ(δ_xy) p(I | s, δ_xy).   (8)

Given the true sprites, the posterior

p_θ(s, δ_xy | I) = p_θ(s | I) p(δ_xy | I)   (9)

is tractable. One could use for instance Expectation-Maximization or greedy approaches like Matching Pursuit to alternate between the search for the position and fitting the best matching shape. For the infinite number of sprites case, we assume that there is a hidden variable z from which the sprites are generated as p(s, z) = p_θ(z) p_θ(s | z). In such case our full posterior becomes

p_θ(z, s, δ_xy | I) = p_θ(z, s | I) p(δ_xy | I) = p_θ(z | I) p_θ(s | I, z) p(δ_xy | I).   (10)

We can simplify (10) assuming p_θ(z | s) = p_θ(z | I) for simple images without ambiguity and no sprite occlusion. For a scalable inference in the case of unknown θ and z and intractable p_θ(z | s) we can use the auto-encoding variational Bayes (VAE) approach proposed by Kingma & Welling (2013). Using VAE we define an approximate recognition model q_φ(z | s). In such case, the log-likelihood of the i.i.d. images I_i can be written as

log p_θ(I_i) = D_KL(q_φ(z | s_i) || p_θ(z | s_i)) + D_KL(p_θ(z | s_i) || p_θ(z | I_i)) + L(θ, φ, δ_xy, I_i),   (11)

where the variational lower bound is

L(θ, φ, δ_xy, I_i) = −D_KL(q_φ(z | s_i) || p_θ(z)) + E_{q_φ(z|s,δ) p_θ(δ|I)} [log p_θ(I_i | z, δ)],   (12)

where we dropped the subindices xy and i to avoid clutter. Here we would like to train our model by maximizing the lower bound (12), again inspired by VAE. We can do so using the reparametrization trick, assuming q_φ(z | s) and the prior p_θ(z) to be Gaussian and sampling

z = m_φ(I) + v_φ(I) · ε,   (13)

where ε ∼ N(0, σI), I is the identity matrix, and the functions m_φ(I) and v_φ(I) are deep neural networks learned from data.

One can argue that given z and a good approximation to the posterior q_φ, estimating θ is still tractable. Nevertheless, we preemptively avoid Expectation-Maximization or other search approaches and use instead neural network layers l_x and l_y:

δ_xy = softmax(l_x(I)) ⊗ softmax(l_y(I)),   (14)

with ⊗ denoting the outer product of marginals. We also experiment using STNs. Such amortized inference is also faster in training and test time than EM and will also cover the case where I is itself a learned low dimensional or latent representation instead of an observable image. Bear this in mind while we use this approach even in simple experiments such as those with moving shapes in the Experiments Section. This will help us to understand what can be learned from this model.
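Eqs. (13) and (14) translate almost directly into code; in this sketch the inputs stand in for the outputs of the learned layers l_x, l_y, m_φ and v_φ, which are our own placeholder names for illustration.

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def location_map(lx_out, ly_out):
    # Eq. (14): delta_xy = softmax(l_x(I)) (x) softmax(l_y(I)),
    # an outer product of two categorical marginals over positions
    return np.outer(softmax(ly_out), softmax(lx_out))

def reparametrize(m, v, rng=np.random):
    # Eq. (13): z = m_phi(I) + v_phi(I) * eps with eps ~ N(0, I); sampling
    # is pushed outside the network so gradients flow through m and v
    return m + v * rng.standard_normal(m.shape)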
We extend the model above to videos, i.e. sequences of images I(t) = {I(0), I(1), ...}, assuming that the conditional log-likelihood log pθ(It | HIt) = log pθ(It | Hδt, Hzt) follows (11), where HIt is the history of video frames prior to time point t. Also Hδt and Hzt are the history of position coordinates and the history of latent variables of the sprites, respectively. We should observe that one can make the assumption that the sprites don't change for a given video I(t) and only estimate one sprite s_{t=0} or hidden variable z_{t=0}. This assumption can be useful for long term predictions, but requires that the main object moving in the scene doesn't change.

In the next section, we propose a neural network architecture for maximizing our approximate variational lower bound for 2D videos.

Figure 2: A schematic block diagram for a Perception Updating Network. This configuration uses both convolutions with delta functions for translation and spatial transformers for rotation. It also shows the optional background underlay.

"}, {"section_index": "2", "section_name": "3. PERCEPTION UPDATING NETWORKS", "section_text": "This Section proposes a family of neural architectures for optimizing the lower bound (12). A schematic diagram is represented in Fig. 2. The core of our method is a Recurrent Neural Network (RNN) augmented with task specific modules, namely a sprite addressable memory and transformation modeling layers. RNNs augmented with task specific units were popularized by Graves et al. (2014) in the context of learning simple differentiable algorithms, and served as inspiration for us as well. Here, since we explicitly model the perceived sprites as s or z and update them and their location and/or rotation through time, we decided to call our method simply Perception Updating Networks.

Here an input frame at time t, It, is fed to the RNN, which emits 2 signals: a memory address that selects a relevant sprite, and transformation parameters. If we are doing the translation transformation using convolutions and delta functions, this output is equal to (14). If using STN, the translation operation returns the matrix A used in the resampling step. Note that we could use both, letting convolutions with δ do the translation while constraining A as in (7) to do rotation transformations only. We describe the general case where both δxy and STNs are used in Algorithm 1.

Beyond deciding between STNs vs. δxy, a few other free parameters of our method are the type of RNN (e.g. vanilla RNN, LSTM, GRU, ConvRNN, etc.), the number of neurons in the hidden state of the RNN, and the neural network architectures that infer the correct sprite and the transformation parameters. Our hyperparameter choices are investigated separately in each experiment in the next Section.

Data: input videos It, t ∈ {0, 1, 2, ...}, initial RNN state h0, neural network layers mφ, vφ, d, l, f
Result: video predictions Ĩt, t ∈ {1, 2, 3, ...}
for t ∈ {0, 1, 2, ...} do
    ht = RNN(It, ht-1)
    δxy = softmax(lx(ht)) ⊗ softmax(ly(ht))
    ρ = f(ht)
    A = [cos ρ  sin ρ  0; -sin ρ  cos ρ  0]
    ε ~ N(0, σI)
    zt = mφ(ht) + vφ(ht) · ε
    st = d(zt)
    s̃t = STN(st, A)
    Ĩt+1 = s̃t ∗ δxy
    Ĩt+1 = μ · Ĩt+1 + (1 - μ) · B
end

Algorithm 1: Perception Updating Networks. STN denotes the spatial transformer operator and ∗ denotes convolution. We experimented with several variations of this algorithm, mainly changing if and how the "where" modules δxy and STN are used, changing how the sprite st is calculated, and not using a background B when not necessary.
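The composition step of Algorithm 1 for the convolutional variant can be sketched in a few lines of NumPy/SciPy. The shapes follow the Bouncing Shapes setup used in the experiments below, and all parameter values are random stand-ins rather than learned weights:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# One "convolutional PUN" composition step, with stand-in values for the
# RNN state and learned layers (h_t, W_x, W_y, sprite, mu, B are ours).
h_t = rng.normal(size=100)                           # RNN hidden state
W_x, W_y = rng.normal(size=(20, 100)), rng.normal(size=(20, 100))
delta_xy = np.outer(softmax(W_x @ h_t), softmax(W_y @ h_t))   # "where"

sprite = rng.random((8, 8))                          # "what" (from memory or d(z))
canvas = convolve2d(delta_xy, sprite, mode="same")   # place sprite at delta_xy

mu, B = 0.9, rng.random((20, 20))                    # transparency and background
frame_pred = mu * canvas + (1 - mu) * B              # optional background underlay
```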
In the next section we present experiments with the proposed architecture on synthetic datasets.

"}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In this section we experiment with several implementations of the proposed Perception Updating Networks. We start with a simple synthetic dataset made of videos where one of 3 moving shapes moves with constant speed, bouncing at the edges of an image. This illustrates the working of the finite memory and the addressing scheme in (1). Afterwards we show results on the moving MNIST dataset (Srivastava et al., 2015), commonly used in the literature of generative neural network models of videos.

"}, {"section_index": "4", "section_name": "4.1 BOUNCING SHAPES", "section_text": "In this first experiment we generate videos of one of three shapes moving on a non-zero background. The shapes are a square, a triangle and a cross. The image size is 20 × 20 pixels and the shapes are 8 × 8 pixels. The pixel values are between 0 and 1. The shapes are picked with equal probability and they move at a constant speed of 1 pixel per frame. The shapes start from random initial positions and move in random directions as well.

We tested two implementations of the proposed architecture: one using only convolutions, referred to as convolutional PUN in the figures, and another using spatial transformers, called spatial transformer PUN. For the convolutional PUN the RNN used was a Long Short Term Memory (LSTM) with 100 cells. The RNN in the spatial transformer PUN had 256 cells. In the convolutional PUN, the location layers used to calculate δxy, lx and ly, output vectors of size 20 pixels, and we used the finite addressable memory described in (1). The background is also learned from data as weights of a neural network. This background served to make the task more difficult and force the network to avoid just exploiting any non-zero value. After the convolutional composition Ĩt = st ∗ δxy, we added the background to form a new image using Ĩt = μ · Ĩt + (1 - μ) · B, where μ is a differentiable mask that accounts for the "transparency" of the image Ĩt and B is the learned 20 × 20 pixels background image. For complex shapes this mask could be calculated as a function of the sprite itself.

Figure 3: Results on the Bouncing Shapes dataset. Three 8x8 sprites (a square, a cross and a triangle) were used to generate videos. The shapes move in a 20x20 pixels canvas with a Toeplitz background and bounce on the corners. a) We show one step ahead predictions with the compared methods. b) We also show the learned sprites for the convolutional implementation of the proposed Perception Updating Networks when we over- and under-estimate the size of the desired sprites.

In the following experiments, the training videos were 10 frames long. At test time the network is fed the first 10 frames of a video and asked to predict the next 10. Results for the compared methods are shown in Fig. 4. For the baseline method, we did a hyperparameter search on conventional LSTMs with a single linear output layer until we found one that had comparable results at test time. That network had 256 hidden cells. Also, note that although the scale of the mean square error is the same, the results from our proposed architecture look smoother than those learned by the LSTM, as shown in Fig. 3.

Given such a simple experiment, it is elucidating to visualize values learned by each piece of the network. As expected, the sprite memory learned the 3 investigated shapes in transposed order, since they are reverted by the convolution operation to compose the frame. We also experimented with choosing the size of the learned sprites st smaller and larger than the true shapes. We observed that for larger sizes such as 10 × 10 the sprites converge to the correct shapes, but just using part of the pixels. For smaller sizes such as 6 × 6 pixels, instead of learning a part of the correct shape, the convolutional Perception Updating Network learned to compensate for the lack of enough pixels with more than one non-zero value in the location operation δxy (see Fig. 3). This allows us to suggest to the interested practitioner that, in order to get interpretable results, it is better to use sprites larger than the expected size rather than smaller.

For the spatial transformer PUN the image is calculated as (see Algorithm 1 for context)

A = f(ht),
Ĩt+1 = STN(st, A).

We noticed that the spatial transformer PUN was not able to learn the training videos using an architecture equivalent to the convolutional PUN one. We had to use multiple layers to define the function f(ht). In other words, in the convolution based method δxy can be estimated by a single affine transformation of the state ht, but A cannot.
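To illustrate this last point, a minimal sketch of such a multi-layer pose head f(ht) follows; the layer sizes are ours and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    return rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out)

# Multi-layer head f(h_t) -> A: a single affine map was not enough for the
# spatial transformer PUN, so a hidden layer is stacked before the output.
W1, b1 = make_layer(256, 128)
W2, b2 = make_layer(128, 6)

def f(h):
    h = np.tanh(W1 @ h + b1)             # hidden nonlinearity
    return (W2 @ h + b2).reshape(2, 3)   # the 2x3 pose matrix A

A = f(rng.normal(size=256))              # pose for one RNN state h_t
```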
We also had to use smaller learning rates to guarantee convergence: 0.0001 for STN, while the δxy-based model worked with a value 10 times larger.

Figure 4: Learning curves in the test task of two implementations of the proposed architecture (conv PUN and STN PUN) and an equivalent LSTM baseline. Note that the spatial transformer based PUN was not able to generalize to the test set, i.e. it did not work well for generating videos when getting its own previous outputs as next step inputs.

If we don't use the softmax nonlinearity to construct δxy, the representations learned by the convolutional PUN are no longer visually interpretable. It is interesting to conclude that under this framework the "what" and "where" can only be distinguished if we impose architectural constraints. The reason is the commutative property of the convolution operation.

As a note on rotation, we ran experiments where the sprites are rotated by a random angle before being placed in the image. This new type of video cannot be learned using only convolution based Perception Updating Networks unless we increase the number of sprites proportionally to the number of possible angles. Spatial transformer based Perception Updating Networks can handle this new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be discretized, we found that we could learn to generate the videos faster if we combined the convolutional approach with a mechanism to select the appropriate angle from a set of possibilities. Results on this experiment are not shown in this paper due to space constraints, but they can be reproduced with the companion code.

"}, {"section_index": "5", "section_name": "4.2 MOVING MNIST", "section_text": "The Moving MNIST benchmark uses videos generated by moving 28 × 28 pixel images of handwritten digits in a 64 × 64 pixels canvas. Just like in the Bouncing Shapes dataset, the digits move with different speeds in different directions and can bounce off the walls. Unlike the Bouncing Shapes dataset, there are 60000 different sprites for training and 10000 for test, making it impractical to use a discrete memory module. Instead, we use the continuous latent representation z, followed by st = d(zt), as written in Algorithm 1.

We trained a convolutional Perception Updating Network using 2 layer LSTMs, each one with 1024 cells, for 200 epochs, with 10000 gradient updates per epoch. The latent variable z had 100 dimensions and the decoder d(·) was a single hidden layer MLP with 1000 hidden neurons and softplus activation function. The output layer of this MLP has 784 neurons, which is the size of an MNIST image, and sigmoid activation function.

Figure 5: Sample rollouts of a 2 layer LSTM convolutional Perception Updating Network.
In the test set we obtained a negative log-likelihood of 236 nats with the proposed architecture, while a 2 layer LSTM baseline had 250 nats. Note that our method was optimized to maximize the lower bound (12), not only the likelihood. These results are not as good as those obtained by the Video Pixel Networks (Kalchbrenner et al., 2016), which obtained 87 nats on the test set. Nevertheless, both approaches are not mutually exclusive, and instead of a fully connected decoder we could use a similar PixelCNN decoder to generate sprites with higher likelihood. In this first paper we decided instead to focus on defining the statistical framework and the interpretable "what" and "where" decoupling.

When running the proposed method in rollout mode, feeding the outputs back as next time step inputs, we were able to generate high likelihood frames for more time steps than with a baseline LSTM. Also, since the sprite to be generated and its position in the frame are decoupled, in rollout mode we can fix the sprite and only use the δxy coming from the network. This way we can generate realistic looking frames for even longer, but after a few frames we observed the digits stopped moving or moved in the wrong direction (see video in the companion code repository). This means that the LSTM RNN was not able to maintain its internal dynamics for too long; thus, there is still room for improvement in the proposed architecture.

In Fig. 5 we show sample rollout videos. The network was fed with 10 frames and asked to generate 10 more, getting its own outputs back as inputs; see the companion code repository for an animated version of this figure.

This experiment also suggests several improvements to the proposed architecture. For example, we assumed that the internal RNN has to calculate a sprite at every time step, which is inefficient when the sprites don't change in the video. We should improve the architecture with an extra memory unit that snapshots the sprites and avoids the burden of recalculating the sprites at every step. We believe this would be a possible way to free representation power that the internal RNN could use to model the movement dynamics for even more time steps.

"}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "This paper introduced a statistical framework for modeling videos of 2D scenes inspired by graphics pipelines and variational auto-encoding Bayes. From this statistical framework we derived a variational lower bound that decouples sprites and their dynamics in a video. To optimize this lower bound, we suggested a family of architectures called Perception Updating Networks that can take advantage of this decoupled representation by memorizing sprites or their percepts and updating their location in a scene independently. We showed that this architecture could generate videos that are interpretable and are better suited than baseline RNNs for long video generation.

"}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Ryan Burt for several suggestions to the first draft. This work was partially funded by the University of Florida Graduate Student Fellowship and ONR N00014-14-1-0542.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Jarmo Hurri and Aapo Hyvarinen.
Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation, 15(3):663-691, 2003.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.

Bruno A Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.

Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. arXiv preprint arXiv:1610.02454, 2016.

Danilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. arXiv preprint arXiv:1607.00662, 2016.

Eero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193-1216, 2001.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. CoRR, abs/1502.04681, 2, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Simon Osindero and Geoffrey E Hinton. Modeling image patches with a directed hierarchy of Markov random fields. In Advances in Neural Information Processing Systems, pp. 1121-1128, 2008.

Peter Shirley, Michael Ashikhmin, and Steve Marschner. Fundamentals of Computer Graphics. CRC Press, 2015."}]
B184E5qee
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Language models, which are probability distributions over sequences of words, have many applications such as machine translation (Brown et al., 1993), speech recognition (Bahl et al., 1983) or dialogue agents (Stolcke et al., 2000). While traditional neural network language models have obtained state-of-the-art performance in this domain (Jozefowicz et al., 2016; Mikolov et al., 2010), they lack the capacity to adapt to their recent history, limiting their application to dynamic environments. A recent approach to solve this problem is to augment these networks with an external memory (Graves et al., 2014; Sukhbaatar et al., 2015). These models can potentially use their external memory to store new information and adapt to a changing environment.

While these networks have obtained promising results on language modeling datasets (Sukhbaatar et al., 2015), they are quite computationally expensive. Typically, they have to learn a parametrizable mechanism to read or write to memory cells (Graves et al., 2014; Joulin & Mikolov, 2015). This may limit both the size of their usable memory as well as the quantity of data they can be trained on. In this work, we propose a very light-weight alternative that shares some of the properties of memory augmented networks, notably the capability to dynamically adapt over time. By minimizing the computation burden of the memory, we are able to use larger memory and scale to bigger datasets. We observe in practice that this allows us to surpass the performance of memory augmented networks on different language modeling tasks.

Our model shares some similarities with a model proposed by Kuhn (1988), called the cache model. A cache model stores a simple representation of the recent past, often in the form of unigrams, and uses them for prediction (Kuhn & De Mori, 1990). This contextual information is quite cheap to store and can be accessed efficiently. It also does not need any training and can be applied on top of any model. This makes this model particularly interesting for domain adaptation (Kneser & Steinbiss, 1993).

Our main contribution is to propose a continuous version of the cache model, called Neural Cache Model, that can be adapted to any neural network language model. We store recent hidden activations and use them as representation for the context. Using simply a dot-product with the current hidden activations, they turn out to be extremely informative for prediction. Our model requires no training and can be used on any pre-trained neural network. It also scales effortlessly to thousands of memory cells. We demonstrate the quality of the Neural Cache models on several language model tasks and the LAMBADA dataset (Paperno et al., 2016)."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.

A language model is a probability distribution over sequences of words.
Let V be the size of the vocabulary; each word is represented by a one-hot encoding vector x in R^V = V, corresponding to its index in the vocabulary. Using the chain rule, the probability assigned to a sequence of words x1, ..., xT can be factorized as

P(x1, ..., xT) = ∏_{t=1}^{T} P(x_t | x_{t-1}, ..., x_1).

Language modeling is often framed as learning the conditional probability over words, given the history (Bahl et al., 1983).

This conditional probability is traditionally approximated with non-parametric models based on counting statistics (Goodman, 2001). In particular, smoothed N-gram models (Katz, 1987; Kneser & Ney, 1995) achieve good performance in practice (Mikolov et al., 2011). Parametrized alternatives are either maximum entropy language models (Rosenfeld, 1996), feedforward networks (Bengio et al., 2003) or recurrent networks (Mikolov et al., 2010). In particular, recurrent networks are currently the best solution to approximate this conditional probability, achieving state-of-the-art performance on standard language modeling benchmarks (Jozefowicz et al., 2016; Zilly et al., 2016).

Recurrent networks. Assuming that we have a vector h_t ∈ R^d encoding the history x_t, ..., x_1, the conditional probability of a word w can be parametrized as

p_vocab(w | x_t, ..., x_1) ∝ exp(h_t^T o_w).

The history vector h_t is computed by a recurrent network by recursively applying an equation of the form

h_t = Φ(x_t, h_{t-1}),

where Φ is a function depending on the architecture of the network. Several architectures for recurrent networks have been proposed, such as the Elman network (Elman, 1990), the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) or the gated recurrent unit (GRU) (Chung et al., 2014). One of the simplest recurrent networks is the Elman network (Elman, 1990), where

h_t = σ(L x_t + R h_{t-1}),

where σ is a non-linearity such as the logistic or tanh functions, L ∈ R^{d×V} is a word embedding matrix and R ∈ R^{d×d} is the recurrent matrix. The LSTM architecture is particularly interesting in the context of language modelling (Jozefowicz et al., 2016) and we refer the reader to Graves et al. (2013) for details on this architecture.

Cache model. After a word appears once in a document, it is much more likely to appear again. As an example, the frequency of the word tiger on the Wikipedia page of the same name is 2.8%, compared to 0.0037% over the whole of Wikipedia. Cache models exploit this simple observation to improve n-gram language models by capturing long-range dependencies in documents. More precisely, these models have a cache component, which contains the words that appeared in the recent history (either the document or a fixed number of words). A simple language model, such as a unigram or smoothed bigram model, is fitted on the words of the cache and interpolated with the static language model (trained over a larger dataset). This technique has many advantages. First, this is a very efficient way to adapt a language model to a new domain. Second, such models can predict out-of-vocabulary words (OOV words) after seeing them once. Finally, this helps capture long-range dependencies in documents, in order to generate more coherent text.
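For concreteness, a minimal NumPy sketch of the Elman language model step defined above is given below; all parameter values are random stand-ins, not trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 64                          # vocabulary size, hidden size

# Stand-in parameters for the Elman language model described above.
L = rng.normal(scale=0.1, size=(d, V))   # word embedding matrix
R = rng.normal(scale=0.1, size=(d, d))   # recurrent matrix
O = rng.normal(scale=0.1, size=(V, d))   # output word representations o_w

def step(h_prev, word_id):
    """One recurrence h_t = sigma(L x_t + R h_{t-1}) with a one-hot x_t."""
    h = np.tanh(L[:, word_id] + R @ h_prev)
    logits = O @ h                        # h_t^T o_w for every word w
    p_vocab = np.exp(logits - logits.max())
    return h, p_vocab / p_vocab.sum()

h = np.zeros(d)
for w in [3, 17, 42]:                     # a toy word sequence
    h, p = step(h, w)
```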
The parameters of recurrent neural network language models are learned by minimizing the negative log-likelihood of the training data. This objective function is usually minimized by using the stochastic gradient descent algorithm, or variants such as Adagrad (Duchi et al., 2011). The gradient is computed using the truncated backpropagation through time algorithm (Werbos, 1990; Williams & Peng, 1990).

Figure 1: The neural cache stores the previous hidden states in memory cells. They are then used as keys to retrieve their corresponding word, that is, the next word. There is no transformation applied to the storage during writing and reading.

The Neural Cache Model adds a cache-like memory to neural network language models. It exploits the hidden representations h_t to define a probability distribution over the words in the cache. As illustrated in Figure 1, the cache stores pairs (h_i, x_{i+1}) of a hidden representation and the word which was generated based on this representation (we remind the reader that the vector h_i encodes the history x_1, ..., x_i). At time t, we then define a probability distribution over words stored in the cache, based on the stored hidden representations and the current one h_t, as

p_cache(w | h_{1..t}, x_{1..t}) ∝ Σ_{i=1}^{t-1} 1{w = x_{i+1}} exp(θ h_t^T h_i),

where the scalar θ is a parameter which controls the flatness of the distribution. When θ is equal to zero, the probability distribution over the history is uniform, and our model is equivalent to a unigram cache model (Kuhn & De Mori, 1990).

From the point of view of memory-augmented neural networks, the probability distribution p_cache(w | h_{1..t}, x_{1..t}) given by the neural cache model can be interpreted as the probability of retrieving the word w from the memory given the query h_t, where the desired answer is the next word x_{t+1}. Using previous hidden states as keys for the words in the memory, the memory lookup operator can be implemented with simple dot products between the keys and the query. In contrast to existing memory-augmented neural networks, the neural cache model avoids the need to learn the memory lookup operator. Such a cache can thus be added to a pre-trained recurrent neural language model without fine tuning of the parameters, and a large cache size can be used with negligible impact on the computational cost of a prediction.

Neural cache language model. Following the standard practice in n-gram cache-based language models, the final probability of a word is given by the linear interpolation of the cache language model with the regular language model, obtaining:

p(w | h_{1..t}, x_{1..t}) = (1 - λ) p_vocab(w | h_t) + λ p_cache(w | h_{1..t}, x_{1..t}).

Instead of taking a linear interpolation between the two distributions with a fixed λ, we also consider a global normalization over the two distributions:

p(w | h_{1..t}, x_{1..t}) ∝ exp(h_t^T o_w) + Σ_{i=1}^{t-1} 1{w = x_{i+1}} exp(θ h_t^T h_i + α).

This corresponds to taking a softmax over the vocabulary and the words in the cache. The parameter α controls the weight of the cache component, and is the counterpart of the λ parameter for linear interpolation.

The addition of the neural cache to a recurrent neural language model inherits the advantages of n-gram caches in usual cache-based models: the probability distribution over words is updated online depending on the context, and out-of-vocabulary words can be predicted as soon as they have been seen at least once in the recent history. The neural cache also inherits the ability of the hidden states of recurrent neural networks to model longer-term contexts than small n-grams, and thus allows for a finer modeling of the current context than, e.g., unigram caches.
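The cache distribution and the linear interpolation above reduce to a few lines of code. The sketch below uses random stand-ins for the stored hidden states and assumes the cache already holds (h_i, x_{i+1}) pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
d, V, theta, lam = 64, 1000, 0.3, 0.1

# Toy cache: pairs (h_i, x_{i+1}) collected while running the base model.
cache_h = rng.normal(size=(50, d))        # stored hidden states (keys)
cache_w = rng.integers(V, size=50)        # the words they generated (values)

def cache_prob(h_t):
    """p_cache(w | h_1..t, x_1..t) from dot products with stored states."""
    scores = np.exp(theta * cache_h @ h_t)
    p = np.zeros(V)
    np.add.at(p, cache_w, scores)         # sum the scores of repeated words
    return p / p.sum()

def interpolate(p_vocab, h_t):
    """Final prediction: (1 - lam) * p_vocab + lam * p_cache."""
    return (1 - lam) * p_vocab + lam * cache_prob(h_t)
```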
Model | Test PPL
RNN + cache (Mikolov & Zweig, 2012) | 90.3
LSTM (Zaremba et al., 2014) | 78.4
Variational LSTM (Gal & Ghahramani, 2015) | 73.4
Recurrent Highway Networks (Zilly et al., 2016) | 66.0
Pointer Sentinel LSTM (Merity et al., 2016) | 70.9
LSTM (our implem.) | 82.3
Neural cache model | 72.1

Table 1: Test perplexity on the Penn Tree Bank.

Training procedure. For now, we first train the (recurrent) neural network language model, without the cache component. We only apply the cache model at test time, and choose the hyperparameters θ and λ (or α) on the validation set. A big advantage of our method is that it is very easy and cheap to apply with already trained neural models. There is no need to perform backpropagation over large contexts, and we can thus apply our method with large cache sizes (larger than one thousand).

"}, {"section_index": "2", "section_name": "4 RELATED WORK", "section_text": "Cache model. Adding a cache to a language model was introduced in the context of speech recognition (Kuhn, 1988; Kuhn & De Mori, 1990). These models were further extended by Jelinek et al. (1991) into a smoothed trigram language model, reporting reductions in both perplexity and word error rates. Della Pietra et al. (1992) adapt the cache to a general n-gram model such that it satisfies marginal constraints obtained from the current document.

Adaptive language models. Other adaptive language models have been proposed in the past: Kneser & Steinbiss (1993) and Iyer & Ostendorf (1999) dynamically adapt the parameters of their model to the recent history using different weight interpolation schemes. Bellegarda (2000) and Coccaro & Jurafsky (1998) use latent semantic analysis to adapt their models to the current context. Similarly, topic features have been used with either maximum entropy models (Khudanpur & Wu, 2000) or recurrent networks (Mikolov & Zweig, 2012). Finally, Lau et al. (1993) propose to use pairs of distant words to capture long-range dependencies.

Memory augmented neural networks. In the context of sequence prediction, several memory augmented neural networks have obtained promising results (Sukhbaatar et al., 2015; Graves et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015). In particular, Sukhbaatar et al. (2015) store a representation of the recent past and access it using an attention mechanism (Bahdanau et al., 2014), showing that this reduces the perplexity for language modeling.

Figure 2: Perplexity on the validation set of Penn Tree Bank for linear interpolation (left) and global normalization (right), for various values of hyperparameters θ, λ and α. We use a cache model of size 500. The base model has a validation perplexity of 86.9. The best linear interpolation has a perplexity of 74.6, while the best global normalization has a perplexity of 74.9.
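As a concrete illustration of the tuning step in the training procedure above (and of the sweeps shown in Figure 2), the search over θ and λ can be written as a simple grid search; validation_perplexity is a hypothetical stand-in for evaluating the pre-trained model with a cache on the validation set:

```python
import itertools
import numpy as np

# The cache is applied only at test time, so theta and lambda are chosen by
# grid search on validation perplexity. The dummy surface below stands in
# for actually running the pre-trained LSTM + cache over the validation set.
def validation_perplexity(theta, lam):
    return (theta - 0.2) ** 2 + (lam - 0.3) ** 2 + 80.0

grid = itertools.product(np.linspace(0.0, 0.4, 9),     # theta values
                         np.linspace(0.0, 1.0, 11))    # lambda values
theta_best, lam_best = min(grid, key=lambda p: validation_perplexity(*p))
```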
Table 2: Test perplexity on the wikitext datasets. The two datasets share the same validation and test sets, making all the results comparable.

Model | wikitext2 | wikitext103
Zoneout + Variational LSTM (Merity et al., 2016) | 100.9 | -
Pointer Sentinel LSTM (Merity et al., 2016) | 80.8 | -
LSTM (our implementation) | 99.3 | 48.7
Neural cache model (size = 100) | 81.6 | 44.8
Neural cache model (size = 2,000) | 68.9 | 40.8

This approach has been successfully applied to question answering, when the answer is contained in a given paragraph (Chen et al., 2016; Hermann et al., 2015; Kadlec et al., 2016; Sukhbaatar et al., 2015). Similarly, Vinyals et al. (2015) explore the use of this mechanism to reorder sequences of tokens. Their network uses an attention (or "pointer") over the input sequence to predict which element should be selected as the next output. A similar mechanism, called pointer softmax, has been shown to be usable in the context of machine translation, to decide which word to copy from the source to the target.

Independently of our work, Merity et al. (2016) apply the same mechanism to recurrent networks. Unlike our work, they use the current hidden activation as a representation of the current input (while we use it to represent the output). This requires additional learning of a transformation between the current representation and those in the past. The advantage of our approach is that we can scale to very large caches effortlessly.

"}, {"section_index": "3", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we evaluate our method on various language modeling datasets, which have different sizes and characteristics. On all datasets, we train a static recurrent neural network language model with LSTM units. We then use the hidden representations from this model to obtain our cache, which is interpolated with the static LSTM model. We also evaluate a unigram cache model interpolated with the static model as another baseline.

Datasets. In this section, we describe experiments performed on two small datasets: the Penn Tree Bank (Marcus et al., 1993) and the wikitext2 (Merity et al., 2016) datasets. The Penn Tree Bank dataset is made of articles from the Wall Street Journal, contains 929k training tokens and has a vocabulary size of 10k. The wikitext2 dataset is derived from Wikipedia articles. It contains 2M training tokens and has a vocabulary size of 33k. These datasets contain non-shuffled documents, therefore requiring models to capture inter-sentence dependencies to perform well.

Figure 3: Perplexity on the validation set of wikitext2 for linear interpolation (left) and global normalization (right), for various values of hyperparameters θ, λ and α. We use a cache model of size 2000. The base model has a validation perplexity of 104.2. The best linear interpolation has a perplexity of 72.1, while the best global normalization has a perplexity of 73.5.

Figure 4: Test perplexity as a function of the number of words in the cache, for our method and a unigram cache baseline. We observe that our approach can use larger caches than the baseline.

Implementation details. We train recurrent neural network language models with 1024 LSTM units, regularized with dropout (probability of dropping out units equal to 0.65).
We use the Adagrad algorithm, with a learning rate of 0.2, a batch size of 20 and initial weights uniformly sampled in the range [-0.05, 0.05]. We clip the norm of the gradient to 0.1 and unroll the network for 30 steps. We consider cache sizes on a logarithmic scale, from 50 to 10,000, and fit the cache hyperparameters on the validation set.

Results. We report the perplexity on the validation sets in Figures 2 and 3, for various values of hyperparameters, for linear interpolation and global normalization. First, we observe that on both datasets, the linear interpolation method performs slightly better than the global normalization approach. It is also easier to apply in practice, and we thus use this method in the remainder of this paper. In Tables 1 and 2, we report the test perplexity of our approach and state-of-the-art models. Our approach is competitive with previous models, in particular with the pointer sentinel LSTM model of Merity et al. (2016). On Penn Tree Bank, we note that the improvement over the base model is similar for both methods. On the wikitext2 dataset, both methods obtain similar results when using the same cache size (100 words). Since our method is computationally cheap, it is easy to increase the cache to larger values (2,000 words), leading to dramatic improvements (30% over the baseline, 12% over a small cache of 100 words).

"}, {"section_index": "4", "section_name": "5.2 MEDIUM SCALE EXPERIMENTS", "section_text": "Datasets and implementation details. In this section, we describe experiments performed over two medium scale datasets: text8 and wikitext103. Both datasets are derived from Wikipedia, but different pre-processing was applied. The text8 dataset contains 17M training tokens and has a vocabulary size of 44k words, while the wikitext103 dataset has a training set of size 103M, and a vocabulary size of 267k words. We use the same setting as in the previous section, except for the batch size (we use 128) and dropout parameters (we use 0.45 for text8 and 0.25 for wikitext103). Since both datasets have large vocabularies, we use the adaptive softmax (Grave et al., 2016) for faster training.

Results. We report the test perplexity as a function of the cache size in Figure 4, for the neural cache model and a unigram cache baseline. We observe that our approach can exploit larger cache sizes, compared to the baseline. In Table 2, we observe that the improvement in perplexity of our method over the LSTM baseline on wikitext103 is smaller than for wikitext2 (approx. 16% vs. 30%). The fact that improvements obtained with more advanced techniques decrease when the size of training data increases has already been observed by Goodman (2001). The two wikitext datasets sharing the same test set, we also observe that the LSTM baseline, trained on 103M tokens (wikitext103), strongly outperforms more sophisticated methods trained on 2M tokens (wikitext2). For these two reasons, we believe that it is important to evaluate and compare methods on relatively large datasets.

Table 3: Perplexity on the text8 and lambada datasets. WB5 stands for a 5-gram language model with Witten-Bell smoothing.
Figure 5: Perplexity on the development and control sets of lambada, as a function of the interpolation parameter λ.

Finally, we report experiments carried out on the LAMBADA dataset, introduced by Paperno et al. (2016). This is a dataset of short passages extracted from novels. The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset. The lambada training set contains approximately 200M tokens and has a vocabulary size of 93,215. We report results for our method in Table 3, as well as the performance of baselines from Paperno et al. (2016). Adding a neural cache model to the LSTM baseline strongly improves the performance on the lambada dataset. We also observe in Figure 5 that the best interpolation parameter between the static model and the cache is not the same for the development and control sets. This is due to the fact that more than 83% of passages of the development set include the target word, while this is true for only 14% of the control set. Ideally, a model should have strong results on both sets. One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history h_t.

"}, {"section_index": "5", "section_name": "6 CONCLUSION", "section_text": "We presented the neural cache model to augment neural language models with a longer-term memory that dynamically updates the word probabilities based on the long-term context. A neural cache can be added on top of a pre-trained language model at negligible cost. Our experiments on both language modeling tasks and the challenging LAMBADA dataset show that significant performance gains can be expected by adding this external memory component.

Technically, the neural cache model is similar to some recent memory-augmented neural networks such as pointer networks. However, its specific design makes it possible to avoid learning the memory lookup component. This makes the neural cache appealing since it can use larger cache sizes than memory-augmented networks and can be applied as easily as traditional count-based caches.

"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003.

Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 1993.

Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Noah Coccaro and Daniel Jurafsky. Towards better integration of semantic predictors in statistical language modeling. In ICSLP.
Citeseer, 1998.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 1990.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.

Joshua T Goodman. A bit of progress in language modeling. Computer Speech & Language, 2001.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Rukmini M Iyer and Mari Ostendorf. Modeling long distance dependence in language: Topic mixtures versus dynamic cache models. IEEE Transactions on Speech and Audio Processing, 1999.

Frederick Jelinek, Bernard Merialdo, Salim Roukos, and Martin Strauss. A dynamic language model for speech recognition. In HLT, 1991.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.

Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. A maximum likelihood approach to continuous speech recognition. PAMI, 1983.

Jerome R Bellegarda. Exploiting latent semantic information in statistical language modeling. Proceedings of the IEEE, 2000.

Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Sanjeev Khudanpur and Jun Wu. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech & Language, 2000.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. In ICASSP, 1995.

Roland Kuhn. Speech recognition and the frequency of recently used words: A modified markov model for natural language. In Proceedings of the 12th Conference on Computational Linguistics - Volume 1, 1988.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 1993.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.

Denis Paperno, German Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.

Ronald Rosenfeld. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech and Language, 1996.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.

Paul J Werbos. Backpropagation through time: what it does and how to do it.
1990.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.

Slava M Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. ICASSP, 1987.

Reinhard Kneser and Volker Steinbiss. On the dynamic adaptation of stochastic language models. In ICASSP, 1993.

Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. PAMI, 1990.

Raymond Lau, Ronald Rosenfeld, and Salim Roukos. Trigger-based language models: A maximum entropy approach. In ICASSP, 1993.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, 2011.

Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 2000.

Sainbayar Sukhbaatar, Szlam Arthur, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.

Ronald J Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 1990.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014."}]
ryHlUtqge
[{"section_index": "0", "section_name": "GENERALIZING SKILLS WITH SEMI-SUPERVISED REINFORCEMENT LEARNING", "section_text": "Chelsea Finn†, Tianhe Yu†, Justin Fu†, Pieter Abbeel†‡, Sergey Levine†

† Berkeley AI Research (BAIR), University of California, Berkeley
‡ OpenAI

{cbfinn, tianhe.yu, justinfu, pabbeel, svlevine}@berkeley.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of "labeled" MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of "unlabeled" MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent's own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) provides a powerful framework for learning behavior from high-level goals. RL has been combined with deep networks to learn policies for problems such as Atari games (Mnih et al., 2015), simple Minecraft tasks (Oh et al., 2016), and simulated locomotion (Schulman et al., 2015). To apply reinforcement learning (RL) to real-world scenarios, however, the learned policy must be able to handle the variability of the real world and generalize to scenarios that it has not seen previously. In many such domains, such as robotics and dialog systems, the variability of the real world poses a significant challenge. Methods for training deep, flexible models combined with massive amounts of labeled data are known to enable wide generalization for supervised learning tasks (Russakovsky et al., 2015).
Lifelong learning aims to address this data challenge in the context of RL by enabling the agent to continuously learn as it collects new experiences "on the job," directly in the real world (Thrun & Mitchell, 1995). However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Although the reward is a high-level supervision signal that is in principle easier to provide than detailed labels, in practice it often depends on information that is extrinsic to the agent and is therefore difficult to measure in the real world. For example, in robotics, the reward may depend on the poses of all of the objects in the environment, and in dialog systems, the reward may depend on the happiness of the user. This reward supervision is practical to measure in a small set of instrumented training scenarios, in laboratory settings, or under the guidance of a human teacher, but quickly becomes impractical to provide continuously to a lifelong learning system, when the agent is deployed in varied and diverse real-world settings.

[Figure 1, panel (a): the "reward function available" vs. "reward function unavailable" settings; panel (b):
training | evaluation
RL: M ∈ L | M ∈ U
transfer: M ∈ L, M ∈ U with reward | M ∈ U
SSRL: M ∈ L, M ∈ U no reward | M ∈ U]

Figure 1: We consider the problem of semi-supervised reinforcement learning, where a reward function can be evaluated in some small set of labeled MDPs M ∈ L, but the resulting policy must be successful on a larger set of unlabeled MDPs M ∈ U for which the reward function is not known. In standard RL, the policy is trained only on the labeled MDPs, while in transfer learning, the policy is finetuned using a known reward function in the unlabeled MDP set. Semi-supervised RL is distinct in that it involves using experience from the unlabeled set without access to the reward function.

Conceptually, we might imagine that this challenge should not exist, since reinforcement learning should, at least in principle, be able to handle high-level, delayed rewards that can always be measured. For example, a human or animal might have their reward encode some higher-level intrinsic goals such as survival, reproduction, or the absence of pain and hunger. However, most RL methods do not operate at the level of such extremely sparse and high-level rewards, and most of the successes of RL have been in domains with natural sources of detailed external feedback, such as the score in a video game. In most real-world scenarios, such a natural and convenient score typically does not exist.
It therefore seems that intelligent agents in the real world should be able to cope with only partial reward supervision, and that algorithms that enable this are of both practical and conceptual value, since they bring us closer to real-world lifelong reinforcement learning, and can help us understand adaptive intelligent systems that can learn even under limited supervisory feedback. So how can an agent continue to learn in the real world without access to a reward function?

In this work, we formalize this as the problem of semi-supervised reinforcement learning, where the agent must perform RL when the reward function is known in some settings, but cannot be evaluated in others. As illustrated in Figure 1, we assume that the agent can first learn in a small range of "labeled" scenarios, where the reward is available, and then experiences a wider range of "unlabeled" scenarios where it must learn to act successfully, akin to lifelong learning in the real world. This problem statement can be viewed as being analogous to the problem of semi-supervised learning, but with the additional complexity of sequential decision making. Standard approaches to RL simply learn a policy in the scenarios where a reward function is available, and hope that it generalizes to new unseen conditions. However, it should be possible to leverage unlabeled experiences to find a more general policy, and to achieve continuous improvement from lifelong real-world experience.

Our main contribution is to propose and evaluate the first algorithm for performing semi-supervised reinforcement learning, which we call semi-supervised skill generalization (S3G). Our approach can leverage unlabeled experience to learn a policy that can succeed in a wider variety of scenarios than a policy trained only with labeled experiences. In our method, we train an RL policy in settings where a reward function is available, and then run an algorithm that resembles inverse reinforcement learning, to simultaneously learn a reward and a more general policy in the wider range of unlabeled settings. Unlike traditional applications of inverse RL algorithms, we use roll-outs from the RL policy in the labeled conditions as demonstrations, rather than a human expert, making our method completely autonomous. Although our approach is compatible with any choice of reinforcement learning and inverse reinforcement learning algorithm, we use the guided cost learning method in our experimental evaluation, which allows us to evaluate on high-dimensional, continuous robotic manipulation tasks with unknown dynamics while using a relatively modest number of samples (Finn et al., 2016). We compare our method to two baselines: (a) a policy trained with RL in settings where reward labels are available (as is standard), and (b) a policy trained in the unlabeled settings using a reward function trained to regress to available reward labels. We find that S3G recovers a policy that is substantially more effective than the prior, standard approach in a wide variety of settings, without using any additional labeled information. We also find that, by using an inverse RL objective, our method achieves superior generalization to the reward regression approach.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Utilizing both labeled and unlabeled data is a well-known technique that can improve learning performance when data is limited (2009). These techniques are especially important in domains where large, supervised datasets are difficult to acquire, but unlabeled data is plentiful. This problem is generally known as semi-supervised learning.
Methods for solving this problem often include propagating known labels to the unlabeled examples (Zhu & Ghahramani, 2002) and using regularizing side information (Szummer & Jaakkola, 2002) such as the structure of the data. Semi-supervised learning has been performed with deep models, either by blending unsupervised and supervised objectives (Rasmus et al., 2016; Zhang et al., 2016) or by using generative models, with the labels treated as missing data (Kingma et al., 2014). Semi-supervised learning is particularly relevant in robotics and control, where collecting labeled experience on real hardware is expensive. However, while semi-supervised learning has been successful in domains such as object tracking and detection (Teichman & Thrun, 2007), applications to action and control have not applied semi-supervision to the objective of the task itself.

The generalization capabilities of policies learned through RL (and deep RL) have been limited, as pointed out by Oh et al. (2016). That is, typically the settings under which the agent is tested do not vary from those under which it was trained. We develop a method for generalizing skills to a wider range of settings using unlabeled experience. A related but orthogonal problem is transfer learning (Taylor & Stone, 2009; Barrett et al., 2010), which attempts to use prior experience in one domain to improve training performance in another. Transfer learning has been applied to RL domains for transferring information across environments (Mordatch et al., 2016), robots (Devin et al., 2016), and tasks (Konidaris & Barto, 2006; Stolle & Atkeson, 2007; Dragan et al., 2011). The goal of these approaches is typically to utilize experience in a source domain to learn faster or better in the target domain. Unlike most transfer learning scenarios, we assume that supervision cannot be obtained in many scenarios. We are also not concerned with large, systematic domain shift: we assume that the labeled and unlabeled settings come from the same underlying distribution. Note, however, that the method that we develop could be used for transfer learning problems where the state and reward are consistent across domains.

To the best of our knowledge, this paper is the first to provide a practical and tractable algorithm for semi-supervised RL with large, expressive function approximators, and to illustrate that such learning actually improves the generalization of the learned policy. However, the idea of semi-supervised reinforcement learning procedures has been previously discussed as a compelling research direction by Christiano (2016) and Amodei et al. (2016).

To accomplish semi-supervised reinforcement learning, we propose a method that resembles an inverse reinforcement learning (IRL) algorithm, in that it imputes the reward function in the unlabeled settings by learning from the successful trials in the labeled settings. IRL was first introduced by Ng et al. (2000) as the problem of learning reward functions from expert, human demonstrations, typically with the end goal of learning a policy that can succeed from states that are not in the set of demonstrations. We use IRL to infer the reward function underlying a policy previously learned in a small set of labeled scenarios, rather than using expert demonstrations. We build upon prior methods, including guided cost learning, which proposes to learn a cost and a policy simultaneously (Finn et al., 2016).
Note that the problem that we are considering is distinct from semi-supervised inverse reinforcement learning (Audiffren et al., 2015), which makes use of expert and non-expert trajectories for learning. We require a reward function in some instances, rather than expert demonstrations."}, {"section_index": "4", "section_name": "3 SEMI-SUPERVISED REINFORCEMENT LEARNING", "section_text": "We first define semi-supervised reinforcement learning. We would like the problem definition to be able to capture situations where supervision, via the reward function, is only available in a small set of labeled Markov decision processes (MDPs), but where we want our agent to be able to continue to learn to perform successfully in a much larger set of unlabeled MDPs, where reward labels are unavailable. For example, if the task corresponds to an autonomous car learning to drive, the labeled MDPs might correspond to a range of closed courses, while the unlabeled MDPs might involve driving on real-world highways and city streets. We use the terms labeled and unlabeled in analogy to semi-supervised learning, but note that a reward observation is not as directly informative as a label.

Formally, we consider a distribution p(M) over undiscounted finite-horizon MDPs, each defined as a 4-tuple M_i = (S, A, T, R) over states, actions, transition dynamics (which are generally unknown), and reward. The states and actions may be continuous or discrete, and the reward function R is assumed to be the same across MDPs in the distribution p(M). Let L and U denote two sets of MDPs sampled from the distribution p(M). Experience may be collected in both sets of MDPs, but the reward can only be evaluated in the set of labeled MDPs L. The objective is to find a policy π* that maximizes expected reward in the distribution over MDPs:

\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{M \sim p(M)}\!\left[ \mathbb{E}_{\pi}\!\left[ \sum_{t=1}^{H} R(s_t, a_t) \right] \right],

where H denotes the horizon. Note that the notion of finding a policy that succeeds on a distribution of MDPs is very natural in many real-world reinforcement learning problems. For example, in the earlier autonomous driving example, our goal is not to find a policy that succeeds on one particular road or in one particular city, but on all roads that the car might encounter. Note that the problem can also be formalized in terms of a single large MDP with a large diversity of initial states, but viewing the expectation as being over a distribution of MDPs provides a more natural analogue with semi-supervised learning, as we discuss below.

In standard semi-supervised learning, it is assumed that the data distribution is the same across both labeled and unlabeled examples, and the amount of labeled data is limited. Similarly, semi-supervised reinforcement learning assumes that the labeled and unlabeled MDPs are sampled from the same distribution. In SSRL, however, it is the set of labeled MDPs that is limited, whereas acquiring large amounts of experience within the set of labeled MDPs is permissible, though unlimited experience in the labeled MDPs is not sufficient on its own for good performance on the entire MDP distribution.
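To make the problem statement concrete, the following minimal Python sketch (ours, built on a hypothetical MDP interface rather than any code from the paper) shows how the objective above could be estimated by Monte Carlo; reward evaluation is deliberately restricted to the labeled set L:

```python
import numpy as np

# Hypothetical MDP interface: reset(), step(a), reward(s, a), is_labeled.
def rollout(mdp, policy, horizon):
    """Collect one trajectory of (state, action) pairs from an MDP."""
    s, traj = mdp.reset(), []
    for _ in range(horizon):
        a = policy(s)
        traj.append((s, a))
        s = mdp.step(a)
    return traj

def labeled_return(mdp, traj):
    # R can only be evaluated for MDPs in the labeled set L.
    assert mdp.is_labeled, "reward labels are unavailable in U"
    return sum(mdp.reward(s, a) for s, a in traj)

def estimate_objective(labeled_mdps, policy, horizon, n=100):
    """Monte Carlo estimate of E_{M~p(M)} E_pi [ sum_t R(s_t, a_t) ],
    computable only on the labeled subset L."""
    returns = [labeled_return(m, rollout(m, policy, horizon))
               for m in np.random.choice(labeled_mdps, size=n)]
    return float(np.mean(returns))
```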
This setup is motivated by real-world lifelong learning, where an agent (e.g. a robot) may be initially trained with detailed reward information in a small set of scenarios (e.g. with a human teacher), and is then deployed into a much larger set of scenarios, without reward labels. One natural question is how much variation can exist in the distribution over MDPs. We empirically answer this question in our experimental evaluation in Section 5."}, {"section_index": "5", "section_name": "4 SEMI-SUPERVISED SKILL GENERALIZATION", "section_text": "We now present our approach for performing semi-supervised reinforcement learning for generalizing previously learned skills. As discussed previously, our goal is to learn a policy that maximizes expected reward in M ∈ U, using both unlabeled experience in U and labeled experience in L. We will use the formalism adopted in the previous section; however, note that performing RL in a set of MDPs can equivalently be viewed as performing RL in a single MDP with a large diversity of initial conditions.

The standard paradigm in reinforcement learning is to learn a policy in the labeled MDPs and apply it directly to new MDPs from the same distribution, hoping that the original policy will generalize (Oh et al., 2016). An alternative approach is to train a reward function with supervised learning to regress from the agent's observations to the reward labels, and then use this reward function for learning in the unlabeled settings. In our experiments, we find that this approach is often more effective because, unlike the policy, the reward function is decoupled from the rest of the MDP, and can thus generalize more readily. The agent can then continue to learn from unlabeled experiences using the learned reward function. However, because the state distributions in the two sets of MDPs may be different, a function approximator trained on the reward function in the labeled MDPs may not necessarily generalize well to the unlabeled ones, due to the domain shift. A more effective solution would be to incorporate the unlabeled experience sampled from U when learning the reward. Unlike typical semi-supervised learning, the goal is not to learn the reward labels per se, but to learn a policy that optimizes the reward. By incorporating both labeled and unlabeled experience, we can develop an algorithm that alternates between inferring the reward function and updating the policy, which effectively provides a shaping, or curriculum, for learning to perform well in the unlabeled settings. In the following section, we discuss our proposed algorithm in detail.

In order to perform semi-supervised reinforcement learning, we use the framework of maximum entropy control (Ziebart, 2010; Kappen et al., 2012), also called linearly-solvable MDPs (Dvijotham & Todorov, 2010). This framework is a generalization of the standard reinforcement learning formulation, where instead of optimizing the expected reward, we optimize an entropy-regularized objective of the form

\pi_{RL} = \arg\max_{\theta} \; \mathbb{E}_{\pi_\theta, M \in L}\!\left[ \sum_{t=1}^{H} R(s_t, a_t) \right] + \mathcal{H}(\pi_\theta). \quad (1)

To see that this is a generalization of the standard RL setting, observe that, as the magnitude of the reward increases, the relative weight on the entropy regularizer decreases, so the classic RL objective can be recovered by putting a temperature β on the reward, and taking the limit as β → ∞. For finite rewards, this objective encourages policies to take random actions when all options have roughly equal value.
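To spell out the temperature argument in the preceding sentence, a short derivation (ours):

```latex
% Scaling the reward by a temperature \beta and dividing the whole objective
% by \beta > 0 leaves the argmax unchanged but shrinks the entropy weight:
\arg\max_\theta \; \mathbb{E}_{\pi_\theta, M\in L}\Big[\sum_{t=1}^{H} \beta R(s_t,a_t)\Big] + \mathcal{H}(\pi_\theta)
 \;=\; \arg\max_\theta \; \mathbb{E}_{\pi_\theta, M\in L}\Big[\sum_{t=1}^{H} R(s_t,a_t)\Big] + \tfrac{1}{\beta}\,\mathcal{H}(\pi_\theta),
% so as \beta \to \infty the entropy term vanishes and the standard
% expected-reward objective is recovered.
```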
Under the optimal policy π_RL, samples with the highest reward R have the highest likelihood, and the likelihood decreases exponentially with decrease in reward. In our work, this framework helps to produce policies in the labeled MDPs that are diverse, and therefore better suited for inferring reward functions that transfer effectively to the unlabeled MDPs.

After training π_RL, we generate a set of samples from π_RL in L, which we denote as D_{π_RL}. The objective of S3G is to use D_{π_RL} to find a policy that maximizes expected reward in U,

\max_{\theta} \; \mathbb{E}_{\pi_\theta, M \in U}\!\left[ \sum_{t=1}^{H} R(s_t, a_t) \right] + \mathcal{H}(\pi_\theta),

where the reward R is not available. By using the agent's prior experience D_{π_RL}, as well as unlabeled experience in U, we aim to learn a well-shaped reward function to facilitate learning in U. To do so, S3G simultaneously learns a reward function R_φ with parameters φ and optimizes a policy π_θ with parameters θ in the unlabeled MDPs U. This consists of iteratively taking samples D_{π_θ} from the current policy π_θ in U, updating the reward R_φ, and updating the policy π_θ using reward values imputed using R_φ. At the end of the procedure, we end up with a policy π_θ optimized in U. As shown in prior work, this procedure corresponds to an inverse reinforcement learning algorithm that converges to a policy that matches the performance observed in D_{π_RL} (Finn et al., 2016). We next go over the objectives used for updating the reward and the policy.

Reward update: Because of the entropy-regularized objective in Equation 1, it follows that the samples D_{π_RL} are generated from the following maximum entropy distribution (Ziebart, 2010):

p(\tau) = \frac{1}{Z} \exp(R(\tau)), \quad (2)

where τ denotes a single trajectory sample {s_0, a_0, s_1, a_1, ..., s_T} and R(τ) = Σ_t R(s_t, a_t). Thus, the objective of the reward optimization phase is to maximize the log likelihood of the agent's prior experience D_{π_RL} under this exponential model. The computational challenge here is to estimate the partition function Z, which is intractable to compute in high-dimensional spaces. We thus use importance sampling, using samples to estimate the partition function Z as follows:

\mathcal{L}(\phi) = \sum_{\tau \in D_{\pi_{RL}}} R_\phi(\tau) - \log Z \;\approx\; \sum_{\tau \in D_{\pi_{RL}}} R_\phi(\tau) - \log \sum_{\tau \in D_{\text{samp}}} \frac{\exp(R_\phi(\tau))}{q(\tau)}, \quad (3)

where D_samp is the set of samples used for estimating the partition function Z, and q(τ) is the probability of sampling τ under the policy it was generated from. Note that the distribution of this set of samples is crucial for effectively estimating Z. The optimal distribution for importance sampling is the one that is proportional to q(τ) ∝ |exp(R_φ(τ))| = exp(R_φ(τ)). Conveniently, this is also the optimal behavior when the reward function is fully optimized such that R_φ = R. Thus, we adaptively update the policy to minimize the KL-divergence between its own distribution and the distribution induced by the current reward, R_φ(τ), and use samples from the policy to estimate the partition function.
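The reward update can be made concrete with a small sketch (ours, in plain numpy; R_phi, the trajectory containers, and the log-probabilities samp_logq are assumed inputs rather than the paper's actual data structures). It evaluates the importance-sampled objective of Equation 3 with a numerically stable log-sum-exp:

```python
import numpy as np

def reward_objective(R_phi, demo_trajs, samp_trajs, samp_logq):
    """Importance-sampled log-likelihood of Equation 3 (maximized in phi).

    demo_trajs: trajectories from the RL policy in the labeled MDPs (D_piRL)
    samp_trajs: trajectories in D_samp used to estimate the partition function
    samp_logq:  log q(tau) for each sample under the policy that generated it
    """
    demo_term = sum(R_phi(tau) for tau in demo_trajs)
    # Stable log-sum-exp estimate of log Z = log sum_tau exp(R_phi(tau))/q(tau).
    logits = np.array([R_phi(t) - lq for t, lq in zip(samp_trajs, samp_logq)])
    m = logits.max()
    log_Z_hat = m + np.log(np.exp(logits - m).sum())
    return demo_term - log_Z_hat
```

In practice, gradients of this scalar with respect to the reward parameters would be taken by an autodiff framework and applied with mini-batch stochastic gradient descent.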
Since the importance sampling estimate of Z will be high variance at the beginning of training, when fewer policy samples have been collected, we also use the samples from the RL policy π_RL. Thus we set D_samp to be {D_{π_θ} ∪ D_{π_RL}}.

Algorithm 1 Semi-Supervised Skill Generalization
1: inputs: Set of unlabeled MDPs U; reward R for labeled MDPs M ∈ L
2: Optimize π_RL to maximize R in M ∈ L
3: Generate samples D_{π_RL} from π_RL in M ∈ L
4: Initialize D_samp ← D_{π_RL}
5: for iteration i = 1 to I do
6:   Run π_θ in M ∈ U to generate samples D_{π_θ}
7:   Append samples D_samp ← D_samp ∪ D_{π_θ}
8:   Update reward R_φ according to Equation 3 using D_{π_RL} and D_samp
9:   Update policy π_θ according to Equation 4 using R_φ and D_{π_θ}
10: end for
11: return generalized policy π_θ

We parameterize the reward using a neural network, and update it using mini-batch stochastic gradient descent, by backpropagating the gradient of Equation 3 to the parameters of the reward.

Policy update: Our goal with the policy is two-fold. First, we of course need a policy that succeeds in MDPs M ∈ U. But since the reward in these MDPs is unavailable, the policy must also serve to generate samples for more accurately estimating the partition function in Equation 2, so that the reward update step can improve the accuracy of the estimated reward function. The policy optimization objective that achieves both of these is to maximize the expected reward R_φ, augmented with an entropy term as before:

\mathcal{L}(\theta) = \mathbb{E}_{\pi_\theta, M \in U}\!\left[ \sum_{t=1}^{H} R_\phi(s_t, a_t) \right] + \mathcal{H}(\pi_\theta). \quad (4)

While we could in principle use any policy optimization method in this step, our prototype uses mirror descent guided policy search (MDGPS), a sample-efficient policy optimization method suitable for training complex neural network policies that has been validated on real-world physical robots (Montgomery & Levine, 2016). We interleave reward function updates using the objective in Equation 3 within the policy optimization method. We describe the policy optimization procedure in detail in Appendix A.

The full algorithm is presented in Algorithm 1. Note that this iterative procedure of comparing the current policy to the optimal behavior provides a form of shaping or curriculum to learning. Our method is structured similarly to the recently proposed guided cost learning method (Finn et al., 2016), and inherits its convergence properties and theoretical foundations. Guided cost learning is an inverse RL algorithm that interleaves policy learning and reward learning directly in the target domain, which in our case is the unlabeled MDPs. Unlike guided cost learning, however, the cost (or reward) is not inferred from expert human-provided demonstrations, but from the agent's own prior experience in the labeled MDPs.
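Algorithm 1 can also be rendered as a short loop. The sketch below is ours; rl_optimize, collect, reward_update and policy_update are injected stand-ins for the labeled-MDP RL phase, trajectory sampling, the Equation 3 update, and the Equation 4 (MDGPS) update:

```python
def s3g(rl_optimize, collect, reward_update, policy_update,
        labeled_mdps, unlabeled_mdps, init_policy, init_reward, n_iters=10):
    """Algorithm 1 as a plain loop; all components are injected callables."""
    pi_rl = rl_optimize(labeled_mdps)             # line 2: maximize true R in L
    demos = collect(pi_rl, labeled_mdps)          # line 3: D_piRL
    d_samp = list(demos)                          # line 4
    pi_theta, R_phi = init_policy, init_reward
    for _ in range(n_iters):                      # lines 5-10
        d_theta = collect(pi_theta, unlabeled_mdps)
        d_samp += d_theta
        R_phi = reward_update(R_phi, demos, d_samp)          # Equation 3
        pi_theta = policy_update(pi_theta, R_phi, d_theta)   # Equation 4
    return pi_theta                               # line 11
```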
Since the aim of S3G is to improve the generalization performance of a learned policy by leveraging data from the unlabeled MDPs, our experiments focus on domains where generalization is critical for success. Despite the focus on generalization in many machine learning problems, the generalization capabilities of policies trained with RL have frequently been overlooked. For example, in recent RL benchmarks such as the Arcade Learning Environment (Bellemare et al., 2012) and OpenAI Gym (Brockman et al., 2016), the training conditions perfectly match the testing conditions. Thus, we define our own set of simulated control tasks for this paper, explicitly considering the types of variation that a robot might encounter in the real world. Through our evaluation, we seek to measure how well semi-supervised methods can leverage unlabeled experiences to improve the generalization of a deep neural network policy learned only in the labeled scenarios.

Code for reproducing the simulated experiments is available online. Videos of the learned policies can be viewed at sites.google.com/site/semisupervisedrl.

Figure 2: Illustrations of the tasks. For the reacher with vision, the range of the target for the labeled MDPs is shown with a red dotted line, and for the unlabeled MDPs with a green dashed line. For the obstacle and cheetah tasks, we show the highest obstacle height.

Each of the tasks is modeled using the MuJoCo simulator, and involves continuous state and action spaces with unknown dynamics. The task difficulty ranges from simple, low-dimensional problems to tasks with complex dynamics and high-dimensional observations. In each experiment, the reward function is available in some settings but not others, and the unlabeled MDPs generally involve a wider variety of conditions. We visualize the tasks in Figure 2 and describe them in detail below:

obstacle navigation / obstacle height: The goal of this task is to navigate a point robot around an obstacle to a goal position in 2D. The observation is the robot's position and velocity, and does not include the height of the obstacle. The height of the obstacle is 0.2 in the labeled MDP, and 0.5 in the unlabeled MDP.

2-link reacher / mass: This task involves moving the end-effector of a two-link reacher to a specified goal position. The observation is the robot's joint angles, end-effector pose, and their time-derivatives. In the labeled MDPs, the mass of the arm varies between 7e-5 and 7e1, whereas the unlabeled MDPs involve a range of 7e-9 to 7e5.

2-link reacher with vision / target position: The task objective is the same as the 2-link reacher except, in this task, the MDPs involve a wide 2D range of target positions, shown in Figure 2. Instead of passing in the coordinate of the target position, the policy and the reward function receive a raw 64 x 80 RGB image of the environment at the first time step.

half-cheetah jump / wall height: In this task, the goal is for a simulated 6-DOF cheetah-like robot to jump over a wall, with 10% gravity. The observation is the robot's joint angles, global pose, and their velocities, for a total dimension of 20. The unlabeled MDP involves jumping over a 0.5 meter wall, compared to the labeled MDP with a 0.2 meter wall. Success is measured based on whether or not the cheetah fully clears the wall. Policies for reward regression, S3G, and the oracle were initialized from the RL policy.

In all tasks, the continuous action vector corresponds to the torques or forces applied to each of the robot's joints. For the first three tasks, reaching the goal position within 5 cm is considered a success. For the non-visual tasks, the policy was represented using a neural network with 2 hidden layers of 40 units each. The vision task used 3 convolutional layers with 15 filters of size 5 x 5 each, followed by the spatial feature point transformation proposed by Levine et al. (2016), and lastly fully-connected layers of 20 units each.
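As an illustration of the non-visual architecture just described, a minimal numpy forward pass (ours; the paper does not specify the activation function, so ReLU hidden units are an assumption):

```python
import numpy as np

def mlp_policy(params, obs):
    """Forward pass of the non-visual policy: 2 hidden layers of 40 units.

    params is a list of (W, b) pairs; the output is the mean torque/force
    vector. Illustrative sketch only, not the paper's implementation.
    """
    h = obs
    for W, b in params[:-1]:
        h = np.maximum(0.0, W @ h + b)   # assumed ReLU hidden units
    W, b = params[-1]
    return W @ h + b                     # linear output layer

def init_params(obs_dim, act_dim, hidden=(40, 40),
                rng=np.random.default_rng(0)):
    sizes = (obs_dim, *hidden, act_dim)
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]
```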
The reward function architecture mirrored that of the policy, but with a quadratic norm on the output, as done by Finn et al. (2016)."}, {"section_index": "6", "section_name": "5.2 EVALUATION", "section_text": "In our evaluation, we compare the performance of S3G to that of (i) the RL policy π_RL, trained only in the labeled MDPs, (ii) a policy learned using a reward function fitted with supervised learning, and (iii) an oracle policy which can access the true reward function in all scenarios. The architecture of the reward function fitted with supervised learning is the same as that used in S3G.

To extensively test the generalization capabilities of the policies learned with each method, we measure performance on a wide range of settings that is a superset of the unlabeled and labeled MDPs, as indicated in Figure 3. We report the success rate of policies learned with each method in Table 1, and visualize the generalization performance in the 2-link reacher, cheetah, and obstacle tasks in Figure 3. The sample complexity of each method is reported in Appendix B.

Table 1: The success rate of each method with respect to generalization. The table compares the standard RL policy (which is trained only on the labeled MDPs) with both the supervised regression method and S3G. Both of the latter use the unlabeled regime for additional training, though only S3G also uses the unlabeled data to improve the learned reward function.

                              RL policy   reward regression (ours)   S3G (ours)   oracle
  obstacle                    65%         29%                        79%          36%
  2-link reacher              75%         60%                        98%          80%
  2-link reacher with vision  69%         85%                        92%          100%
  half-cheetah                56%         73%                        79%          86%

Figure 3: Generalization capability on the obstacle, 2-link reacher, and half-cheetah tasks as a function of the task variation (success rate vs. log mass / wall height, comparing S3G, reward regression, the RL policy, and the oracle). Performance for these tasks is averaged over 3 random seeds.

In all four tasks, the RL policy π_RL generalizes worse than S3G, which demonstrates that, by using unlabeled experience, we can indeed improve generalization to different masses, target positions, and obstacle sizes. In the obstacle and both reacher tasks, S3G also outperforms reward regression, suggesting that it is also useful to use unlabeled experience to learn the reward.

In the obstacle task, the results demonstrate that the reward functions learned using S3G actually produce better generalization in some cases than learning on both the labeled and unlabeled MDPs with full knowledge of the true reward function. While this may at first seem counterintuitive, it agrees with the observation in prior work (Guo et al., 2013) that the true reward function is not always the best one when learning with limited samples, computational power, or representational capacity (i.e. because it is not sufficiently shaped). S3G also outperforms the oracle and reward regression in the 2-link reacher task, indicating that the learned reward shaping is also beneficial in that task.

For the vision task, the visual features learned via RL in the labeled MDPs were used to initialize the vision layers of the reward and policy. We trained the vision-based reacher with S3G both with end-to-end finetuning of the visual features and with the visual features frozen and only the fully-connected layers trained on the unlabeled MDPs. We found performance to be similar in both cases, suggesting that the visual features learned with RL were good enough, though fine-tuning the features end-to-end with the inverse RL objective did not hurt the performance.

We presented the first method for semi-supervised reinforcement learning, motivated by real-world lifelong learning. By inferring the reward in settings where one is not available, S3G can improve the generalization of a learned neural network policy trained only in the "labeled" settings.
Additionally, we find that, compared to using supervised regression to reward labels, we can achieve higher performance using an inverse RL objective for inferring the reward underlying the agent's prior experience. Interestingly, this does not directly make use of the reward labels when inferring the reward of states in the unlabeled MDPs, and our results on the obstacle navigation task in fact suggest that the rewards learned with S3G exhibit better shaping.

As we discussed previously, the reward and policy optimization methods that we build on in this work are efficient enough to learn complex tasks with hundreds of trials, making them well suited for learning on physical systems such as robots. Indeed, previous work has evaluated similar methods on real physical systems, in the context of inverse RL (Finn et al., 2016) and vision-based policy learning (Levine et al., 2016). Thus, it is likely feasible to apply this method for semi-supervised reinforcement learning on a real robotic system. Applying S3G on physical systems has the potential to enable real-world lifelong learning, where an agent is initialized using a moderate amount of labeled experience in a constrained setting, such as a robot learning a skill for the first time in the lab, and is then allowed to explore the real world while continuously improving its capabilities without additional supervision. This type of continuous semi-supervised reinforcement learning has the potential to remove the traditional distinction between a training and a test phase for reinforcement learning agents, providing us with autonomous systems that continue to get better with use."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Anca Dragan for insightful discussions, and Aviv Tamar and Roberto Calandra for helpful feedback on the paper. Funding was provided by the NSF GRFP, the DARPA Simplex program, and Berkeley DeepDrive."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Pieter Abbeel and Andrew Ng. Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2004.

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.

Julien Audiffren, Michal Valko, Alessandro Lazaric, and Mohammad Ghavamzadeh. Maximum entropy semi-supervised inverse reinforcement learning. International Joint Conference on Artificial Intelligence (IJCAI), 2015.

Samuel Barrett, Matt E. Taylor, and Peter Stone. Transfer learning for reinforcement learning on a physical robot. In Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (ALA), 2010.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents.
Journal of Artificial Intelligence Research, 2012.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.

Anca Dragan, Geoffrey Gordon, and Siddhartha Srinivasa. Learning from experience in manipulation planning: Setting the right goals. International Symposium on Experimental Robotics (ISER), 2011.

Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. International Conference on Machine Learning (ICML), 2016.

Xiaoxiao Guo, Satinder Singh, and Richard L Lewis. Reward mapping for transfer in long-lived agents. In Neural Information Processing Systems (NIPS), 2013.

Jonathan Ho, Jayesh K. Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. International Conference on Machine Learning (ICML), 2016.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Neural Information Processing Systems (NIPS), 2014.

George Konidaris and Andrew Barto. Autonomous shaping: Knowledge transfer in reinforcement learning. International Conference on Machine Learning (ICML), 2006.

William Montgomery and Sergey Levine. Guided policy search as approximate mirror descent. Advances in Neural Information Processing Systems (NIPS), 2016.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. International Conference on Machine Learning (ICML), 2016.

John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. International Conference on Machine Learning (ICML), 2015.

Martin Stolle and Christopher G. Atkeson. Knowledge transfer using local features. Approximate Dynamic Programming and Reinforcement Learning (ADPRL), 2007.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research (JMLR), 2009.

Alex Teichman and Sebastian Thrun. Tracking-based semi-supervised learning. Robotics: Science and Systems (RSS), 2007.

Sebastian Thrun and Tom M Mitchell. Lifelong robot learning. Springer Berlin Heidelberg, 1995.

Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. Adapting deep visuomotor representations with weak pairwise constraints. Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016.

Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Morgan & Claypool, 2009.

Brian Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University, 2010.
"}, {"section_index": "9", "section_name": "MIRROR DESCENT GUIDED POLICY SEARCH", "section_text": "To optimize policies with S3G, we chose to use mirror-descent guided policy search (MDGPS), for its superior sample efficiency over other policy optimization methods. MDGPS belongs to the class of guided policy search methods, which simplify policy search by decomposing the problem into two phases: (a) a trajectory-centric RL phase (C-step) and (b) a supervised learning phase (S-step). During the C-step, a trajectory-centric RL method is used to train "local" controllers for each of M initial positions. In the S-step, a global policy π_θ(a|s) is trained using supervised learning to match the output of each of the local policies.

MDGPS can be interpreted as an approximate variant of mirror descent on the expected cost J(θ) = Σ_t E_{π_θ(s_t, a_t)}[−R(s_t, a_t)] under the policy's trajectory distribution, where π_θ(s_t, a_t) denotes the marginal of π_θ(τ) = p(s_1) Π_{t=1}^{T} p(s_{t+1}|s_t, a_t) π_θ(a_t|s_t), and τ = {s_1, a_1, ..., s_T, a_T} denotes the trajectory. In the C-step, we learn new local policies for each initial position, and in the S-step we project the local policies down to a single global policy π_θ, using the KL divergence as the distance metric.

To produce local policies, we make use of the iterative linear quadratic regulator (iLQR) algorithm to train time-varying linear-Gaussian controllers. iLQR makes up for its weak representational power by being sample-efficient under regimes where it is capable of learning. Usage of iLQR requires a twice-differentiable cost function and linearized dynamics.

In order to fit a dynamics model, we use the recent samples to fit a Gaussian mixture model (GMM) on (s_t, a_t, s_{t+1}) tuples. We then use linear regression to fit time-varying linear dynamics of the form s_{t+1} = F_t s_t + f_t on local policy samples from the most recent iteration, using the clusters from the GMM as a normal-inverse-Wishart prior.

During the C-step, for each initial condition m, we optimize an entropy-augmented objective of the form below, constrained against the global policy:

q_m = \arg\max_{q} \; \mathbb{E}_{q, s_0 \sim p_m(s_0)}\!\left[ \sum_{t=1}^{T} R(s_t, a_t) \right] + \mathcal{H}(q) \quad \text{s.t.} \quad D_{KL}(q \,\|\, \pi_\theta) \le \epsilon,

where R(s_t, a_t) is a twice-differentiable objective, such as the L2 distance from a target state. This optimization results in a local time-varying linear-Gaussian controller q_m(a_t|s_t) = N(K_{m,t} s_t + k_{m,t}, C_{m,t}), which is executed to obtain supervised learning examples for the S-step."}, {"section_index": "10", "section_name": "B SAMPLE COMPLEXITY OF EXPERIMENTS", "section_text": "Because we use guided policy search to optimize the policy, we inherit its sample efficiency. In Table 2, we report the number of samples used in both labeled and unlabeled scenarios for all tasks and all methods. Note that the labeled samples used by the oracle are in fact from the "unlabeled" MDPs U, where we generally assume that reward labels are not available.

Table 2: Sample complexity of each experiment. This table records the total number of samples used to train policies in the labeled setting (RL and oracle) and the unlabeled setting (reward regression, S3G). The sample complexity of unlabeled experiments is denoted as (unlabeled samples + labeled samples)."}]
SkkTMpjex
[{"section_index": "0", "section_name": "DISTRIBUTED SECOND-ORDER OPTIMIZATION USING\nKRONECKER-FACTORED APPROXIMATIONS", "section_text": "Roger Grosse\nJimmy Ba\nUniversity of Toronto\nUniversity of Toronto\njimmy@psi.toronto.edu\nAs more computational resources become available, machine learning researchers\ntrain ever larger neural networks on millions of data points using stochastic gradi-\nent descent (SGD). Although SGD scales well in terms of both the size of dataset\nand the number of parameters of the model, it has rapidly diminishing returns as\nparallel computing resources increase. Second-order optimization methods have\nan affinity for well-estimated gradients and large mini-batches, and can therefore\nbenefit much more from parallel computation in principle. Unfortunately, they\noften employ severe approximations to the curvature matrix in order to scale to\nlarge models with millions of parameters, limiting their effectiveness in practice\nversus well-tuned SGD with momentum. The recently proposed K-FAC method\n5) uses a stronger and more sophisticated curvature ap-\nproximation, and has been shown to make much more per-iteration progress than\nSGD, while only introducing a modest overhead. In this paper, we develop a ver-\nsion of K-FAC that distributes the computation of gradients and additional quan-\ntities required by K-FAC across multiple machines, thereby taking advantage of\nthe method\u2019s superior scaling to large mini-batches and mitigating its additional\noverheads. We provide a Tensorflow implementation of our approach which is\neasy to use and can be applied to many existing codebases without modification.\nAdditionally, we develop several algorithmic enhancements to K-FAC which can\nimprove its computational performance for very large models. Finally, we show\nthat our distributed K-FAC method speeds up training of various state-of-the-art\nImageNet classification models by a factor of two compared to an improved form\n\nof Batch Normalization (Ioffe and Szegedy} |2015)."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Second-order optimization methods, which use second-order information to construct updates that\naccount for the curvature of objective function, represent a promising alternative. The canonical\nsecond-order methods work by inverting a large curvature matrix (traditionally the Hessian), but\nthis doesn\u2019t scale well to deep neural networks with millions of parameters. Various approximations\nto the curvature matrix have been proposed to help alleviate this problem, such as diagonal\netal] (1958) 2014).\nand low-rank ones (\n\nBerahas}|2015 |\nUniversity of Toronto\nand Google DeepMind"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Current state-of-the-art deep neural networks (Szegedy et al.| 2014} Krizhevsky et al. 2012} [He]\n\noften require days of training time with millions of training cases. The typical strategy\nto speed-up neural network training is to allocate more parallel resources over many machines and\ncluster nodes (Dean et al.||2012). Parallel training also enables researchers to build larger models\nwhere different machines compute different splits of the mini-batches. Although we have improved\nour distributed training setups over the years, neural networks are still trained with various simple\nfirst-order stochastic gradient descent (SGD) algorithms. 
Despite how well SGD scales with the size of the model and the size of the dataset, it does not scale well with the parallel computation resources. Larger mini-batches and more parallel computations exhibit diminishing returns for SGD and related algorithms.

Second-order optimization methods, which use second-order information to construct updates that account for the curvature of the objective function, represent a promising alternative. The canonical second-order methods work by inverting a large curvature matrix (traditionally the Hessian), but this doesn't scale well to deep neural networks with millions of parameters. Various approximations to the curvature matrix have been proposed to help alleviate this problem, such as diagonal approximations (e.g. Kingma and Ba, 2014) and low-rank ones (e.g. Berahas et al., 2015). Another strategy is to use Krylov-subspace methods and efficient matrix-vector products to avoid explicitly forming and inverting the curvature matrix.

The usual problem with curvature approximations, especially low-rank and diagonal ones, is that they are very crude and only model superficial aspects of the true curvature in the objective function. Krylov-subspace methods, on the other hand, suffer because they still rely on 1st-order methods to compute their updates.

More recently, several approximations have been proposed based on statistical approximations of the Fisher information matrix (Heskes, 2000; Ollivier, 2013; Povey et al., 2015; Desjardins et al., 2015). In the K-FAC approach (Martens and Grosse, 2015; Grosse and Martens, 2016), these approximations result in a block-diagonal approximation to the Fisher information matrix (with blocks corresponding to entire layers), where each block is approximated as a Kronecker product of two much smaller matrices, both of which can be estimated and inverted fairly efficiently. Because the inverse of a Kronecker product of two matrices is the Kronecker product of their inverses, this allows the entire matrix to be inverted efficiently.

Martens and Grosse (2015) found that K-FAC scales very favorably to larger mini-batches compared to SGD, enjoying a nearly linear relationship between mini-batch size and per-iteration progress for medium-to-large sized mini-batches. One possible explanation for this phenomenon is that second-order methods make more rapid progress exploring the error surface and reaching a neighborhood of a local minimum where gradient noise (which is inversely proportional to mini-batch size) becomes the chief limiting factor in convergence.[1] This observation implies that K-FAC would benefit in particular from a highly parallel distributed implementation.

[1] Mathematical evidence for this idea can be found in Martens (2014), where it is shown that (convex quadratic) objective functions decompose into noise-dependent and independent terms, and that second-order methods make much more rapid progress optimizing the noise-independent term compared to SGD, while having no effect on the noise-dependent term (which shrinks with the size of the mini-batch).

In this paper, we propose an asynchronous distributed version of K-FAC that can effectively exploit large amounts of parallel computing resources, and which scales to industrial-scale neural net models with hundreds of millions of parameters. Our method augments the traditional distributed synchronous SGD setup with additional computation nodes that update the approximate Fisher and compute its inverse. The proposed method achieves a comparable per-iteration runtime to normal SGD using the same mini-batch size on a typical 4-GPU cluster. We also propose a "doubly factored" Kronecker approximation for layers whose inputs are feature maps that are normally too large to be handled by the standard Kronecker-factored approximation. Finally, we empirically demonstrate that the proposed method speeds up learning of various state-of-the-art ImageNet models by a factor of two over Batch Normalization (Ioffe and Szegedy, 2015).

Let DW be the gradient of the log likelihood L of a neural network w.r.t. some weight matrix W ∈ R^{C_out x C_in} in a layer, where C_in, C_out are the number of input/output units of the layer. The block of the Fisher information matrix of that layer is given by:

F = \mathbb{E}_{x,y}\!\left[ \text{vec}\{DW\}\, \text{vec}\{DW\}^\top \right], \quad (1)

where P is the distribution over the input x and the network's distribution over targets y (implied by the log-likelihood objective). Throughout this paper we assume, unless otherwise stated, that expectations are taken with respect to P (and not the training distribution over y).

K-FAC (Martens and Grosse, 2015; Grosse and Martens, 2016) uses a Kronecker-factored approximation to each block, which we now describe.
Denote the input activation vector to the layer as A ∈ R^{C_in}, the pre-activation inputs as s = WA, and the back-propagated loss derivatives as Ds = ∂L/∂s ∈ R^{C_out}. Note that the gradient of the weights is the outer product of the back-propagated derivatives and the input activations, DW = Ds A^T. K-FAC approximates the Fisher block as a Kronecker product of the second-order statistics of the input activations and the backpropagated derivatives:

F = \mathbb{E}[\text{vec}\{DW\}\text{vec}\{DW\}^\top] = \mathbb{E}[AA^\top \otimes DsDs^\top] \approx \mathbb{E}[AA^\top] \otimes \mathbb{E}[DsDs^\top] \triangleq \hat{F}. \quad (2)

This approximation can be interpreted as making the assumption that the second-order statistics of the activations and the backpropagated derivatives are uncorrelated.

The natural gradient (Amari, 1998) is defined as the inverse of the Fisher times the gradient. It is traditionally interpreted as the direction in parameter space that achieves the largest (instantaneous) improvement in the objective per unit of change in the output distribution of the network (as measured using the KL-divergence). Under certain conditions, which almost always hold in practice, it can also be interpreted as a second-order update computed by minimizing a local quadratic approximation of the log-likelihood objective, where the Hessian is approximated using the Fisher (Martens, 2014).

To compute the approximate natural gradient in K-FAC, one multiplies the gradient for the weights of each layer by the inverse of the corresponding approximate Fisher block F̂ for that layer. Denote the gradient of the loss function with respect to the weights W by G_W ∈ R^{C_in x C_out}. We will assume the use of the factorized Tikhonov damping approach described by Martens and Grosse (2015), where the addition of the damping term λI to F̂ is approximated by adding π_A λ^{1/2} I to E[AA^T] and π_Ds λ^{1/2} I to E[DsDs^T], where π_A and π_Ds are adjustment factors that are described in detail and generalized in Sec. 4.1. (Note that one can also include the contribution to the curvature from any L2 regularization terms with λ.)

By exploiting the basic identities (A ⊗ B)^{-1} = (A^{-1} ⊗ B^{-1}) and (A ⊗ B) vec(C) = vec(BCA^T), the approximate natural gradient update v can then be computed as:

v = (\hat{F} + \lambda I)^{-1}\text{vec}\{G_W\} \approx \text{vec}\!\left\{ (\mathbb{E}[AA^\top] + \pi_A \lambda^{1/2} I)^{-1}\, G_W \,(\mathbb{E}[DsDs^\top] + \pi_{Ds} \lambda^{1/2} I)^{-1} \right\}, \quad (3)

which amounts to several matrix inversion and multiplication operations involving matrices roughly the same size as the weight matrix W.
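The per-layer computation just described can be summarized in a short numpy sketch (ours, not the paper's implementation); it estimates the two Kronecker factors from a batch, applies factored Tikhonov damping, and exploits (A ⊗ B)^{-1} vec(C) = vec(B^{-1} C A^{-1}) by working with the small factors directly:

```python
import numpy as np

def kfac_update(G_W, A_batch, Ds_batch, lam=1e-2):
    """Approximate natural-gradient update for one layer (illustrative sketch).

    G_W:      gradient w.r.t. the weights, shape (C_in, C_out)
    A_batch:  input activations, shape (N, C_in)
    Ds_batch: backpropagated pre-activation derivatives, shape (N, C_out)
    """
    N = A_batch.shape[0]
    A_cov = A_batch.T @ A_batch / N          # estimate of E[A A^T]
    S_cov = Ds_batch.T @ Ds_batch / N        # estimate of E[Ds Ds^T]
    # Factored Tikhonov damping: pi balances the factors' average eigenvalues
    # (trace over dimension), as in Martens & Grosse (2015); pi_Ds = 1 / pi_A.
    pi = np.sqrt((np.trace(A_cov) / A_cov.shape[0]) /
                 (np.trace(S_cov) / S_cov.shape[0]))
    A_damped = A_cov + pi * np.sqrt(lam) * np.eye(A_cov.shape[0])
    S_damped = S_cov + (1.0 / pi) * np.sqrt(lam) * np.eye(S_cov.shape[0])
    # Equation 3: (E[AA^T] + pi_A sqrt(lam) I)^{-1} G_W (E[DsDs^T] + ...)^{-1}
    return np.linalg.solve(A_damped, G_W) @ np.linalg.inv(S_damped)
```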
"}, {"section_index": "4", "section_name": "3 DISTRIBUTED OPTIMIZATION USING K-FAC", "section_text": "Stochastic optimization algorithms benefit from low-variance gradient estimates (as might be obtained from larger mini-batches). Prior work suggests that approximate natural gradient algorithms might benefit more than standard SGD from reducing the variance (Martens and Grosse, 2015; Grosse and Martens, 2016). One way to efficiently obtain low-variance gradient estimates is to parallelize the gradient computation across many machines in a distributed system (thus allowing large mini-batches to be processed efficiently). Because the gradient computation in K-FAC is identical to that of SGD, we parallelize the gradient computation using the standard synchronous SGD model.

However, K-FAC also introduces other forms of overhead not found in SGD, in particular the estimation of second-order statistics and the computation of inverses or eigenvalues of the Kronecker factors. In this section, we describe how these additional computations can be performed asynchronously. While this asynchronous computation introduces an additional source of error into the algorithm, we find that it does not significantly affect the per-iteration progress in practice. All in all, the per-iteration wall-clock time of our distributed K-FAC implementation is only 5-10% higher compared to synchronous SGD with the same mini-batch size."}, {"section_index": "5", "section_name": "3.1 ASYNCHRONOUS FISHER BLOCK INVERSION", "section_text": "Computing the parameter updates as per Eq. 3 requires the estimated gradients to be multiplied by the inverses of the smaller Kronecker factors. This requires periodically computing (typically) either inverses or eigendecompositions of each of these factors. While these factors typically have sizes only in the hundreds or low thousands, very deep networks may have hundreds of such matrices (2 or more for each layer). Furthermore, matrix inversion and eigendecomposition see little benefit from GPU computation, so they can be more expensive than standard neural network operations. For these reasons, inverting the approximate Fisher blocks represents a significant computational cost.

It has been observed that refreshing the inverses of the Fisher blocks only occasionally, and using stale values otherwise, has only a small detrimental effect on average per-iteration progress, perhaps because the curvature changes relatively slowly (Martens and Grosse, 2015). We push this a step further by computing the inverses asynchronously while the network is still training. Because the required linear algebra operations are CPU-bound while the rest of our computations are GPU-bound, we perform them on the CPU with little effective overhead. Our curvature statistics are somewhat more stale as a result, but this does not appear to significantly affect per-iteration optimization performance. In our experiments, we found that computing the inverses asynchronously usually offered a 40-50% speed-up to the overall wall-clock time of the K-FAC algorithm.
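A toy illustration (ours) of this asynchronous scheme, with a background CPU thread refreshing inverses while training reads the most recent, possibly stale, values; the single sqrt(lambda) damping term here is a simplification of the factored scheme:

```python
import queue
import threading

import numpy as np

factors_in = queue.Queue()
inverses = {}                 # latest inverse per Kronecker factor, by name
lock = threading.Lock()

def inversion_worker(lam=1e-2):
    while True:
        item = factors_in.get()
        if item is None:      # shutdown signal
            break
        name, factor = item
        inv = np.linalg.inv(factor + np.sqrt(lam) * np.eye(len(factor)))
        with lock:
            inverses[name] = inv   # training reads these, possibly stale

threading.Thread(target=inversion_worker, daemon=True).start()
# A training step would enqueue fresh statistics, e.g.
# factors_in.put(("layer1/A", A_cov)), and read `inverses` under `lock`.
```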
The other major source of computational overhead in K-FAC is the estimation of the second-order statistics of the activations and derivatives, which are needed for the Kronecker factors. In the standard K-FAC algorithm, these statistics are computed on the same mini-batches as the gradients, allowing the forward pass computations to be shared between the gradient and statistics computations. By computing the gradients and statistics on separate mini-batches, we can enable a higher degree of parallelism, at the expense of slightly more total computational operations. Under this scheme, the statistics estimation is independent of the gradient computation, so it can be done on one or more separate worker nodes with their own independent data shards. These worker nodes receive parameters from the parameter server (just as in synchronous SGD) and communicate statistics back to the parameter server. In our experiments, we assigned at most one worker to computing statistics.

In cases where it is undesirable to devote separate worker nodes to computing statistics, we also introduce a fast approximation to the statistics for convolution layers (see Appendix A).

Figure 1: The diagram illustrates the distributed computation of K-FAC. Gradient workers (blue) compute the gradient w.r.t. the loss function. Stats workers (grey) compute the sampled second-order statistics. Additional workers (red) compute inverse Fisher blocks. The parameter server (orange) uses gradients and their inverse Fisher blocks to compute parameter updates."}, {"section_index": "6", "section_name": "4 DOUBLY-FACTORED KRONECKER APPROXIMATION FOR LARGE CONVOLUTION LAYERS", "section_text": "Computing the standard Kronecker-factored Fisher approximation for a given layer involves operations on matrices whose dimension is the number of input units or output units. The cost of these operations is reasonable for most fully-connected networks, because the number of units in each layer rarely exceeds a couple thousand. Large convolutional neural networks, however, often include a fully-connected layer that "pools" over a large feature map before the final softmax classification. For instance, the output of the last pooling layer of AlexNet is of size 6 x 6 x 256 = 9216, which then provides inputs to the subsequent fully connected layer of 4096 ReLUs. VGG models also share a similar architecture. For the standard Kronecker-factored approximation, one of the factors will be a matrix of size 9216 x 9216, which is too expensive to be explicitly inverted as often as is needed during training.

In this section we propose a "doubly-factored" Kronecker approximation for layers whose input is a large feature map. Specifically, we approximate the second-order statistics matrix of the inputs as itself factoring as a Kronecker product. This gives an approximation which is a Kronecker product of three matrices.

Using the AlexNet example, the 9216 x 4096 weight matrix in the first fully connected layer is equivalent to a filterbank of 4096 filters with kernel size 6 x 6 on 256 input channels. Let A be a matrix of dimension T-by-C_in representing the input activations (for a single training case), where T = K_w x K_h is the product of the feature map height and width, and C_in is the number of input channels. The Fisher block for such a layer can be written as:

\mathbb{E}[\text{vec}\{DW\}\text{vec}\{DW\}^\top] = \mathbb{E}[\text{vec}\{A\}\text{vec}\{A\}^\top \otimes DsDs^\top], \quad A \in \mathbb{R}^{T \times C_{in}}.

We begin by making the following rank-1 approximation:

A \approx K \Psi^\top,

where K ∈ R^T and Ψ ∈ R^{C_in} are the factors along the spatial location dimension and the input channel dimension. The optimal solution of a low-rank approximation under the Frobenius norm is given by the singular value decomposition. The activation matrix A is small enough that its SVD can be computed efficiently. Let σ_1, u_1, v_1 be the first singular value and its left and right singular vectors of the activation matrix A, respectively. The factors of the rank-1 approximation are then chosen to be K = sqrt(σ_1) u_1 and Ψ = sqrt(σ_1) v_1.
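In numpy, these rank-1 factors are read directly off the SVD (sketch, ours; the 6 x 6 x 256 AlexNet feature map gives T = 36, C_in = 256):

```python
import numpy as np

def rank1_factors(A):
    """Rank-1 factors of the activation map A (T x C_in), chosen from the
    leading singular triplet as K = sqrt(s1)*u1, Psi = sqrt(s1)*v1."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sqrt(S[0]) * U[:, 0], np.sqrt(S[0]) * Vt[0, :]

A = np.random.default_rng(0).normal(size=(36, 256))  # T = 6*6, C_in = 256
K, Psi = rank1_factors(A)
# outer(K, Psi) is the best rank-1 approximation to A in Frobenius norm;
# its error is governed by the remaining singular values.
err = np.linalg.norm(A - np.outer(K, Psi)) / np.linalg.norm(A)
print(f"relative rank-1 approximation error: {err:.3f}")
```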
K captures the activation patterns across spatial locations in a feature map and Ψ captures the pattern across the filter responses. Under the rank-1 approximation of A we have:

\mathbb{E}[\text{vec}\{A\}\text{vec}\{A\}^\top \otimes DsDs^\top] \approx \mathbb{E}[\text{vec}\{K\Psi^\top\}\text{vec}\{K\Psi^\top\}^\top \otimes DsDs^\top] = \mathbb{E}[KK^\top \otimes \Psi\Psi^\top \otimes DsDs^\top].

We further assume the second-order statistics are three-way independent between the loss derivatives Ds, the activations along the input channels Ψ, and the activations along spatial locations K:

\mathbb{E}[\text{vec}\{DW\}\text{vec}\{DW\}^\top] \approx \mathbb{E}[KK^\top] \otimes \mathbb{E}[\Psi\Psi^\top] \otimes \mathbb{E}[DsDs^\top].

The final approximated Fisher block is a Kronecker product of three small matrices. And note that although we assumed the feature map activations have low-rank structure, the resulting approximated Fisher is not low-rank.

The approximate natural gradient for this layer can then be computed by multiplying the inverses of each of the smaller matrices against the respective dimensions of the gradient tensor. We define a function R_i : R^{d_1 x d_2 x d_3} -> R^{(d_j d_k) x d_i} that constructs a matrix from a 3D tensor by "reshaping" it so that the desired target dimension i ∈ {1, 2, 3} maps to the columns, while the remaining dimensions (j and k) are "folded together" and map to the rows. Given the gradient of the weights G_W ∈ R^{T x C_in x C_out}, we can compute the matrix-vector product with the inverse of the doubly-factored Kronecker-approximated Fisher block as:

R_3^{-1}\!\left( \mathbb{E}[DsDs^\top]^{-1} R_3\!\left( R_2^{-1}\!\left( \mathbb{E}[\Psi\Psi^\top]^{-1} R_2\!\left( R_1^{-1}\!\left( \mathbb{E}[KK^\top]^{-1} R_1(G_W) \right) \right) \right) \right) \right).

The doubly-factored Kronecker approximation provides a computationally feasible alternative to the standard Kronecker-factored approximation for layers that have a number of parameters on the order of hundreds of millions. For example, inverting it for the first fully connected layer of AlexNet takes about 15 seconds on an 8-core Intel Xeon CPU, and such time is amortized in our asynchronous algorithm.

Unfortunately, the homogeneous coordinate formulation is no longer applicable under this new approximation. Instead, we lump the bias parameters together and associate a full Fisher block with them, which can be explicitly computed and inverted, since the number of bias parameters per layer is small.

In second-order optimization methods, "damping" performs the crucial task of correcting for the inaccuracies of the local quadratic approximation of the objective that is (perhaps implicitly) optimized when computing the update (Martens and Sutskever, 2012; Martens, 2014, e.g.). In the well-known Tikhonov damping/regularization approach, one adds a multiple of the identity λI to the Fisher before inverting it (as one also does for L2-regularization / weight-decay), which roughly corresponds to imposing a spherical trust-region on the update.

The inverse of a Kronecker product can be computed efficiently as the Kronecker product of the inverses of its factors. Adding a multiple of the identity complicates this computation (although it can still be performed tractably using eigendecompositions). The "factored Tikhonov damping" technique proposed in Martens and Grosse (2015) is appealing because it preserves the Kronecker structure of the factorization, and thus the inverse can still be computed by inverting each of the smaller matrices (thereby avoiding the more expensive eigendecomposition operation). In our experiments with large ImageNet models, we also observe that the factored damping seems to perform better in practice. In this subsection we derive a generalized version of factored Tikhonov damping for the doubly-factored Kronecker approximation.

Suppose we wish to add λI to our approximate Fisher block A ⊗ B ⊗ C. In the factored Tikhonov scheme this is approximated by adding π_a λ^{1/3} I, π_b λ^{1/3} I, and π_c λ^{1/3} I to A, B and C respectively, for non-negative scalars π_a, π_b and π_c satisfying π_a π_b π_c = 1. The error associated with this approximation is:

(A + \pi_a \lambda^{1/3} I) \otimes (B + \pi_b \lambda^{1/3} I) \otimes (C + \pi_c \lambda^{1/3} I) - (A \otimes B \otimes C + \lambda I)
= \pi_a \lambda^{1/3} (I \otimes B \otimes C) + \pi_b \lambda^{1/3} (A \otimes I \otimes C) + \pi_c \lambda^{1/3} (A \otimes B \otimes I)
+ \pi_a \pi_b \lambda^{2/3} (I \otimes I \otimes C) + \pi_a \pi_c \lambda^{2/3} (I \otimes B \otimes I) + \pi_b \pi_c \lambda^{2/3} (A \otimes I \otimes I),

where the triple cross term π_a π_b π_c λ (I ⊗ I ⊗ I) = λI cancels against the subtracted damping term.

Following Martens and Grosse (2015), we choose π_a, π_b and π_c by taking the nuclear norm of this error and minimizing its triangle-inequality-derived upper bound. Note that the nuclear norm of a Kronecker product is the product of the nuclear norms of the individual matrices: ||A ⊗ B||_* = ||A||_* ||B||_*. This gives the following formula for the value of π_a:

\pi_a = \left( \frac{(\|A\|_* / d_a)^2}{(\|B\|_* / d_b)(\|C\|_* / d_c)} \right)^{1/3}, \quad (12)

where the d's are the number of rows (equiv. columns) of the corresponding Kronecker factor matrices. The corresponding formulae for π_b and π_c are analogous. Intuitively, Eq. 12 rescales the contribution to each factor matrix according to the geometric mean of the ratio of its norm vs the norms of the other factor matrices. This results in the contribution being upscaled if the factor's norm is larger than the average norm, for example. Note that this formula generalizes to Kronecker products of arbitrary numbers of matrices as the geometric mean of the norm ratios.
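The geometric-mean rule extends naturally to any number of factors, as the following sketch (ours) shows; nuclear norms divided by dimension play the role of an average singular value:

```python
import numpy as np

def damping_weights(factors):
    """Factored Tikhonov weights pi_i following the geometric-mean rule of
    Eq. 12, for an arbitrary list of square Kronecker factors.

    Returns one pi per factor, with prod(pi_i) == 1; factor F_i would then
    be damped as F_i + pi_i * lam**(1/len(factors)) * I.
    """
    ratios = [np.linalg.norm(f, ord="nuc") / f.shape[0] for f in factors]
    geo_mean = np.prod(ratios) ** (1.0 / len(ratios))
    return [r / geo_mean for r in ratios]

rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(d, d)) for d in (6, 256, 512))
pis = damping_weights([A, B, C])
assert np.isclose(np.prod(pis), 1.0)
```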
Although Grosse and Martens (2016) found that Polyak averaging (Polyak and Juditsky, 1992) obviated the need for tuning learning rate schedules on some problems, we observed the choice of learning rate schedule to be an important factor in our ImageNet experiments (perhaps due to higher stochasticity in the updates). On ImageNet, it is common to use a fixed exponential decay schedule (Szegedy et al., 2014; 2015). As an alternative to learning rate schedules, we instead use curvature information to control the amount by which the predictive distribution is allowed to change after each update. In particular, given a parameter update vector v, the second-order Taylor approximation to the KL divergence between the predictive distributions before and after the update is given by the (squared) Fisher norm:

D_{KL}[q \,\|\, p] \approx \tfrac{1}{2} v^\top F v.

This quantity can be computed with a curvature-vector product (Schraudolph, 2002). Observe that choosing a step size of η will produce an update with squared Fisher norm η^2 v^T F v. Instead of using a learning rate schedule, we choose η in each iteration such that the squared Fisher norm is at most some value c:

\eta = \min\!\left( \eta_{\max}, \sqrt{\frac{c}{v^\top F v}} \right).

Grosse and Martens (2016) used this method to clip updates at the start of training, but we found it useful to use it throughout training. We use an exponential decay schedule c_k = c_0 ζ^k, where c_0 and ζ are tunable parameters, and k is incremented periodically (every half an epoch in our ImageNet experiments). Shrinking the maximum change in the model prediction after each update is analogous to shrinking the trust region of the second-order optimization. In practice, computing curvature-vector products after every update introduces significant computational overhead, so we instead used the approximate Fisher F̂ in place of F, which allows the approximate Fisher norm to be computed efficiently as v^T F̂ v = v^T F̂ (F̂^{-1} G) = v^T G, where G denotes the gradient. The maximum step size η_max was set to a large value, and in practice this maximum was reached only at the beginning of training, when F̂ was small in magnitude. We found this outperformed simple exponential learning rate decay in our ImageNet experiments (see Appendix B).
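A sketch (ours) of the resulting step-size rule; eta_max and the example values are placeholders, and the v^T g shortcut is only exact when v = F̂^{-1} g, i.e. ignoring the damping term:

```python
import numpy as np

def kl_step_size(v_dot_g, c, eta_max=10.0):
    """eta = min(eta_max, sqrt(c / v^T F v)); with v = F_hat^{-1} g the
    quadratic form v^T F_hat v reduces to v^T g, so no extra
    curvature-vector product is needed."""
    return min(eta_max, float(np.sqrt(c / max(v_dot_g, 1e-12))))

# Exponentially decayed KL budget c_k = c0 * zeta^k, with k incremented
# periodically (every half-epoch in the paper's ImageNet runs).
c0, zeta = 0.01, 0.96
etas = [kl_step_size(v_dot_g=0.5, c=c0 * zeta**k) for k in range(3)]
```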
In practice, computin,\n-urvature-vector products after every update introduces significant computational overhead, so w\nnstead used the approximate Fisher Fin place of F\u2019, which allows the approximate Fisher norm t\nye computed efficiently as vifv= v F(E\"Gy) =v! Gy. The maximum step Size Nmax Wa\net to a large value, and in practice this maximum was reached only at the beginning of training\nvhen F\u2019 was small in magnitude. We found this outperformed simple exponential learning rat\nlecay on ImageNet experiments (see Appendix|B).\nDue to computational resource constraints, we used a single GPU server with 8 Nvidia K80 GPUs\nto simulate a large distributed system. The GPUs were used as gradient workers that computed the\ngradient over a large mini-batch, with the CPUs acting as a parameter server. The Fisher block\ninversions were performed on the CPUs in parallel, using as many threads as possible. The second-\norder statistics required for the various Fisher block approximations were computed either syn-\ncronously by the gradient workers after each gradient computation (CIFAR-10 experiments), o1\nasynchronously using a separate dedicated \u201c\u2018stats worker\u201d (ImageNet experiments).\nSimilarly to |Martens and Grosse! (2015), we applied an exponentially decayed Polyak averagins\nscheme to the sequence of output iterates produced by each method. We found this improved thei\n\nconvergence rate in the later stages of optimization, and reduced or eliminated the need to decay the\nlearning rates.\nWe chose to base our implementation of distributed K-FAC on the TensorFlow framework\nbecause it provides well-engineered and scalable primitives for distributed computation.\nWe implement distributed K-FAC in TensorFlow by scanning the gradient-computing graph for\ngroups of parameters whose gradient computations have particular structures. Having identified such\n\ngroups we compute/approximate their Fisher blocks using a method tailored to the type of structure\n1+\nDxulal|p] \u00a9 a\u201d Fv\n. / oc )\n7) = main (ros viFv\nMeta-parameters such as learning rates, damping parameters, and the decay-rate for the second-\norder statistics, were optimized carefully by hand for each method. The momentum was fixed to\n0.9.\nNLL\n\nNLL\n\n1s\nLal\n12\n1.9}\nos\n06\n04\n\n0.30\n\n0.10)\n\n0.05)\n\ndist. K-FAC async gpul 0.25\ndist.K-FAC async gpu4| 20\ndist.K-FAC sync gpul a\ndist.K-FAC sync gpu4 G 0.15)\n0.10\nsna 0.05\nee a 000 At sNAtananeneresesenapencces es\n\u00b0 500 1000 15002000 2500 3000 0 500 1000 1500 2000 2500 3000\nUpdates Updates\n0.30\n0.25\n0.20\n\u00a7 0.15\n\n0.00)\n0\n\n400 500, 600\nFigure 2: The results from our CIFAR-10 experiment looking at the effectiveness of asynchronously\ncomputing the approximate Fisher inverses. gpu indicates the number of gradient workers. Dashed\nlines denote training curves and solid lines denote test curves. Top row: cross entropy loss and\nclassification error vs the number of updates. Bottom row: cross entropy loss and classification\nerror vs wallclock time.\nobserved. See Appendix|C]for details. This type of implementation can be applied to existing model-\nspecification code without significant modification of said code. 
And because TensorFlow's parallel primitives were designed with scalability in mind, it should be possible to scale our implementation to a larger distributed system with hundreds of workers."}, {"section_index": "7", "section_name": "6.1 CIFAR-10 CLASSIFICATION AND ASYNCHRONOUS FISHER BLOCK INVERSION", "section_text": "In our first experiment we evaluated the effectiveness of asynchronously computing the approximate Fisher inverses (as described above). We considered the effect that this has both on the quality of the updates, as measured by per-iteration progress on the objective, and on the average per-iteration wall-clock time.
The task is to train a basic convolutional network model on the CIFAR-10 image classification dataset (Krizhevsky and Hinton, 2009). The model has 3 convolutional layers of 32-32-64 filters, each with a receptive field size of 5x5, followed by a softmax layer that predicts 10 classes. This is similar, but not identical, to the CIFAR-10 model used by Grosse and Martens (2016). All the CIFAR-10 experiments use a mini-batch size of 512.
The baseline method is a simple synchronous version of distributed K-FAC with a fixed learning rate, and up to 4 GPUs acting as gradient and stats workers, which recomputes the inverses of the approximate Fisher blocks once every 20 iterations. This baseline method behaves similarly to the implementation of K-FAC in Grosse and Martens (2016), while being potentially faster due to its greater use of parallelism. We compare this baseline to a version of distributed K-FAC where the approximate Fisher blocks are inverted asynchronously and in parallel with the rest of the optimization process. Note that under this scheme, inverses are updated about once every 16 iterations for the single GPU condition, and every 30 iterations for the four GPU condition. For networks larger than this relatively small CIFAR-10 net they may get updated (far) less often (e.g. the AlexNet experiments in Section 6.2).
The results of this first experiment are plotted in Fig. 2. We found that the asynchronous version iterated about 1.5 times faster than the synchronous version, while its per-iteration progress remained comparable. The plots show that the asynchronous version is better at taking advantage of parallel computation and displayed an almost linear speed-up as the number of gradient workers increases to 4. In terms of wall-clock time, using only 4 GPUs the asynchronous version of distributed K-FAC is able to complete 700 iterations in under a minute, where it achieves the minimum test error (19%)."}, {"section_index": "8", "section_name": "6.2 IMAGENET CLASSIFICATION", "section_text": "In our second set of experiments we benchmarked distributed K-FAC against several other popular approaches, and considered the effect of mini-batch size on per-iteration progress. To do this we trained various off-the-shelf convnet architectures for image classification on the ImageNet dataset (Russakovsky et al., 2015): AlexNet (Krizhevsky et al., 2012), GoogLeNet InceptionV1 (Szegedy et al., 2014), and the 50-layer Residual network (He et al., 2015).
Figure 3: Optimization performance of distributed K-FAC and SGD training GoogLeNet on ImageNet. Dashed lines denote training curves and solid lines denote validation curves. bz indicates the size of mini-batches. rbz indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wallclock time (in hours). All methods used 4 GPUs, with distributed K-FAC using the 4-th GPU as a dedicated asynchronous stats worker.
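As a concrete illustration of the asynchronous inversion scheme evaluated above, the following minimal Python sketch (our own, not the paper's code) shows the shape of the idea: a background thread repeatedly refreshes the Fisher-block inverses while the optimizer consumes whichever inverses are currently available, however stale. The class and attribute names are illustrative assumptions.

```python
import threading
import numpy as np

class AsyncInverter:
    """Keeps (possibly stale) inverses of Kronecker factors up to date in a
    background thread while the optimizer keeps stepping. Illustrative only."""

    def __init__(self, get_factors, damping=1e-3):
        self.get_factors = get_factors      # returns current {name: factor matrix}
        self.damping = damping
        self.inverses = {}                  # latest available inverses
        self.lock = threading.Lock()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:                         # runs in parallel with optimization
            for name, factor in self.get_factors().items():
                d = factor.shape[0]
                inv = np.linalg.inv(factor + self.damping * np.eye(d))
                with self.lock:
                    self.inverses[name] = inv

    def current(self):
        with self.lock:
            return dict(self.inverses)      # optimizer uses these, however stale
```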
Despite having 1.2 million images in the ImageNet training set, a data pre-processing pipeline is almost always used for training on ImageNet that includes image jittering and aspect distortion. We used a less extensive dataset augmentation/pre-processing pipeline than is typically used for ImageNet, as the purpose of this paper is not to achieve state-of-the-art ImageNet results, but rather to evaluate the optimization performance of distributed K-FAC. In particular, the dataset consists of 224x224 images, and during training the original images are first resized to 256x256 and then randomly cropped back down to 224x224 before being fed to the network. Note that while it is typically the case that validation error is higher than training error, this data pre-processing pipeline for ImageNet creates an augmented training set that is more difficult than the undistorted validation set, and therefore the validation error is often lower than the training error during the first 90% of training. This observation is consistent with previously published results (He et al., 2015).
In all our ImageNet experiments, we used the cheaper Kronecker factorization from Appendix A and the KL-based step size selection method described in Section 5, with parameters c₀ = 0.01 and ζ = 0.96. The SGD baselines use an exponential learning rate decay schedule with a decay rate of 0.96. Decaying is applied after each half-epoch for distributed K-FAC and SGD+Batch Normalization, and after every two epochs for plain SGD, which is consistent with the experimental setup of Ioffe and Szegedy (2015)."}, {"section_index": "10", "section_name": "6.2.1 GOOGLENET AND BATCH NORMALIZATION", "section_text": "Batch Normalization (Ioffe and Szegedy, 2015) is a reparameterization of neural networks that can make them easier to train with first-order methods, and has been successfully applied to large ImageNet models. It can be thought of as a modification of the units of a neural network so that each one centers and normalizes its own raw input over the current mini-batch (or subset thereof), after which it applies a separate shift and scaling operation via its own local "bias" and "gain" parameters (which are optimized). These shift and scaling operations can learn to effectively undo the centering and normalization, thus preserving the class of functions that the network can compute. Batch Normalization (BN) is closely related to centering techniques (Schraudolph, 1998), and likely helps for the same reason that they do, which is that the alternative parameterization gives rise to loss surfaces with more favorable curvature properties.
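To make the reparameterization concrete, here is a small NumPy sketch (ours, for illustration) of a BN unit: the input is centered and normalized over the mini-batch, then shifted and scaled by learned per-unit bias and gain, which can in principle undo the normalization.

```python
import numpy as np

def batch_norm_unit(x, gain, bias, eps=1e-5):
    """x: (batch, units) raw inputs; gain/bias: learned per-unit parameters.
    Centering/normalization is part of the model, so gradients flow through
    the batch statistics as well."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # center and normalize over the batch
    return gain * x_hat + bias                # learned shift/scale can undo this
```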
The main difference between BN and traditional centering is that BN makes the centering and normalization operations part of the model instead of the optimization algorithm (and thus "backprops" through them when computing the gradient), which helps stabilize the optimization.
Without any changes to the algorithm, distributed K-FAC can be used to train neural networks that have BN layers. The weight-matrix gradient for such layers has the same structure as it does for standard layers, and so Fisher blocks can be approximated using the same set of techniques. The per-unit gain and bias parameters cause a minor complication, but because they are relatively few in number, one can compute an exact Fisher block for each of them.
Figure 4: Optimization performance of distributed K-FAC and SGD training AlexNet on ImageNet. Dashed lines denote training curves and solid lines denote validation curves. bz indicates the size of the mini-batches. rbz indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and validation error vs the number of updates. Bottom row: cross entropy loss and validation error vs wallclock time (in hours). All methods used 8 GPUs, with distributed K-FAC using the 8-th GPU as a dedicated asynchronous stats worker.
Computing updates for BN networks over large mini-batches is usually done by splitting the mini-batch into chunks of size 32, computing the gradients separately for these chunks (using only the data in the chunk to compute the mean and variance statistics), and then summing them together. Using small sample sets to compute the statistics like this introduces additional stochasticity into the BN update that acts as a regularizer, but it can also hurt optimization performance. To help decouple the effects of regularization and optimization, we also compared to a BN baseline that uses larger chunks. We found using larger chunks can give a factor of 2 speed-up in optimization performance over the standard BN baseline. In our figures, rbz indicates the chunk size, which defaults to 32 if left unspecified.
In Fig. 3 we compare distributed K-FAC to SGD on GoogLeNet with and without BN. All methods used 4 GPUs, with distributed K-FAC using the 4-th GPU as a dedicated asynchronous stats worker. We observe that the per-iteration progress made by distributed K-FAC on the training objective is not significantly affected by the use of BN. Moreover, distributed K-FAC is 3.5 times faster than SGD with the standard BN baseline (orange line) and 1.5-2 times faster than the enhanced BN baseline (blue line). BN, however, does help distributed K-FAC generalize better, likely due to its aforementioned regularizing effect. For the simplicity of our discussion, distributed K-FAC is not combined with BN in the rest of the experiments, as we are chiefly interested in evaluating optimization performance, not regularization, and BN doesn't seem to provide any additional benefit to distributed K-FAC in regards to the former. Note that this is not too surprising, given that K-FAC is provably invariant to the kind of centering and normalization transformations that BN does (Martens and Grosse, 2015).
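A minimal sketch (our own illustration) of the chunked BN update described earlier: the mini-batch is split into chunks of size rbz, each chunk's gradient is computed with BN statistics from that chunk alone, and the chunk gradients are summed.

```python
def chunked_bn_gradient(grad_fn, batch, rbz=32):
    """grad_fn(chunk) -> gradient computed with BN mean/variance taken over
    `chunk` only. Summing per-chunk gradients reproduces the usual large-batch
    BN update; larger rbz reduces the extra stochasticity in the BN statistics."""
    total = None
    for start in range(0, len(batch), rbz):
        g = grad_fn(batch[start:start + rbz])
        total = g if total is None else total + g
    return total
```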
To demonstrate that distributed K-FAC can efficiently optimize models with very wide layers, we train AlexNet using distributed K-FAC and compare to SGD+BN. The doubly-factored Kronecker approximation proposed above is applied to the first fully-connected layer of AlexNet, which has 9216 input units and is thus too wide for the standard Kronecker approximation to be feasible. Note that even with this additional approximation, computing all of the Fisher block inverses for AlexNet is very expensive, and in our experiments they only get updated once every few hundred iterations by our 16-core Xeon 2.2GHz CPU.
The results from this experiment are plotted in Fig. 4. They show that distributed K-FAC still works well despite the potentially extreme staleness of the Fisher block inverses, speeding up training by a factor of 1.5 over the improved SGD+BN baseline.
Figure 6: A comparison of distributed K-FAC and SGD on per-training-case progress on training loss and errors. The experiments were conducted using GoogLeNet with various mini-batch sizes.
In recent years very deep convolutional architectures have been successfully applied to ImageNet classification. These networks are particularly challenging to train because the usual difficulties associated with deep learning are especially severe. Fortunately, second-order optimization is perhaps ideally suited to addressing these difficulties in a robust and principled way (Martens, 2010).
To investigate whether distributed K-FAC can scale to such architectures and provide useful acceleration, we compared it to SGD+BN using the 50-layer ResNet architecture (He et al., 2015). The results from this experiment are plotted in Fig. 5. They show that distributed K-FAC provides a significant speed-up during the early stages of training compared to SGD+BN.
In our final experiment we explored how well distributed K-FAC scales as additional parallel computing resources become available. To do this we trained GoogLeNet with varying mini-batch sizes of {256, 1024, 2048}, and measured per-training-case progress. Ideally, if extra gradient data is being used efficiently, one should expect the per-training-case progress to remain relatively constant with respect to mini-batch size. The results from this experiment are plotted in Fig. 6 and show that distributed K-FAC exhibits something close to this ideal behavior, while SGD+BN rapidly loses data efficiency when moving beyond a mini-batch size of 256.
These results suggest that distributed K-FAC, more so than the SGD+BN baseline, is capable of speeding up training in proportion to the amount of parallel computational resources used.
Figure 5: Optimization performance of distributed K-FAC and SGD training ResNet50 on ImageNet. The dashed lines are the training curves and solid lines are the validation curves. bz indicates the size of mini-batches. rbz indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wallclock time (in hours). All methods used 8 GPUs, with distributed K-FAC using the 8-th GPU as a dedicated asynchronous stats worker."}, {"section_index": "11", "section_name": "7 DISCUSSION", "section_text": "We have introduced distributed K-FAC, an asynchronous distributed second-order optimization algorithm which computes Kronecker-factored Fisher approximations and stochastic gradients over larger mini-batches asynchronously and in parallel.
Our experiments show that the extra overhead introduced by distributed K-FAC is mostly mitigated by the use of parallel asynchronous computation, resulting in updates that can be computed in a similar amount of time to those of distributed SGD, while making much more progress on the objective function per iteration. We showed that in practice this can lead to speedups of roughly 3.5x compared to standard SGD + Batch Normalization (BN), and 2x compared to SGD + an improved version of BN, on large-scale convolutional network training tasks.
We also proposed a doubly-factored Kronecker approximation that allows distributed K-FAC to scale up to large models with hundreds of millions of parameters, and demonstrated the effectiveness of this approach in experiments.
Finally, we showed that distributed K-FAC enjoys a favorable scaling property with mini-batch size that is seemingly not shared by SGD+BN. In particular, we showed that per-iteration progress tends to be proportional to the mini-batch size up to a much larger threshold than for SGD+BN. This suggests that it will yield even further reductions in total wall-clock training time when implemented in a larger distributed system than the one we considered."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math compiler in Python. In Proc. 9th Python in Science Conf., pages 1-7, 2010.
Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 10(Jul):1737-1754, 2009.
Richard H Byrd, SL Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008-1031, 2016.
Minhyung Cho, Chandra Dhir, and Jaehyung Lee. Hessian-free optimization for learning deep multidimensional recurrent neural networks.
In Advances in Neural Information Processing Systems, pages 883-891, 2015.
Frank Curtis. A self-correcting variable-metric algorithm for stochastic optimization. In Proceedings of the 33rd International Conference on Machine Learning, pages 632-641, 2016.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223-1231, 2012.
Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2071-2079, 2015.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
Roger Grosse and James Martens. A Kronecker-factored approximate Fisher matrix for convolution layers. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Xi He, Dheevatsa Mudigere, Mikhail Smelyanskiy, and Martin Takáč. Large scale distributed Hessian-free optimization for deep neural network. arXiv preprint arXiv:1606.00511, 2016.
Tom Heskes. On "natural" learning and pruning in multilayered perceptrons. Neural Computation, 12(4):881-901, 2000.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Nitish Shirish Keskar and Albert S Berahas. adaQN: An adaptive quasi-Newton algorithm for training RNNs. arXiv preprint arXiv:1511.01169, 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
Nicolas Le Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In Advances in Neural Information Processing Systems, pages 849-856, 2008.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 735-742, 2010.
James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
James Martens and Ilya Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pages 479-535. Springer, 2012.
Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. Parallel training of DNNs with natural gradient and parameter averaging. In International Conference on Learning Representations: Workshop track, 2015.
Vivek Ramamurthy and Nigel Duffy. L-SR1: A novel second order optimization method for deep learning.
Olga Russakovsky, Jia Deng, Hao Su, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Nicol N. Schraudolph. Centering neural network gradient factors. In Neural Networks: Tricks of the Trade. Springer, 1998.
Nicol N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7), 2002.
Nicol N Schraudolph, Jin Yu, Simon Günter, et al. A stochastic quasi-Newton method for online convex optimization. In AISTATS, volume 7, pages 436-443, 2007.
Xiao Wang, Shiqian Ma, and Wei Liu.
Stochastic quasi-Newton methods for nonconvex stochastic optimization. arXiv preprint arXiv:1412.1196, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Oriol Vinyals and Daniel Povey. Krylov subspace descent for deep learning. In AISTATS, pages 1261-1268, 2012.
Figure 7: Empirical evaluation of the proposed cheaper Kronecker approximation on GoogLeNet. bz indicates the size of the mini-batches. Dashed lines denote training curves and solid lines denote validation curves. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wallclock time.
In a convolution layer, the gradient is the sum of the outer products between the receptive field input activations A_t and the back-propagated derivatives Ds_t, at each spatial location t ∈ T. One cannot simply apply the standard Kronecker-factored approximation from Martens and Grosse (2015) to each location, sum the results, and then take the inverse, as there is no known efficient algorithm for computing the inverse of such a sum.
In Grosse and Martens (2016), a Kronecker-factored approximation for convolutional layers called Kronecker Factors for Convolution (KFC) was developed. It works by introducing additional statistical assumptions about how the weight gradients are related across locations. In particular, KFC assumes spatial homogeneity, i.e. that all locations have the same statistics, and spatially uncorrelated derivatives, which (essentially) means that gradients from any two different locations are statistically independent. This yields the following approximation:

E[vec{DW} vec{DW}ᵀ] ≈ |T| E[A_t A_tᵀ] ⊗ E[Ds_t Ds_tᵀ]    (15)

In this section we introduce an arguably simpler Kronecker-factored approximation for convolutional layers that is cheaper to compute. In practice, it appears to be competitive with the original KFC approximation in terms of per-iteration progress on the objective, working worse in some experiments and better in others, while (often) improving wall-clock time due to its cheaper cost.
It works by approximating the sum of the gradients over spatial locations as the outer product of the averaged receptive field activations over locations, E_t[A_t], and the averaged back-propagated derivatives, E_t[Ds_t], multiplied by the number of spatial locations |T|.
In other words:

E[vec{DW} vec{DW}ᵀ] = E[ vec{Σ_{t∈T} Ds_t A_tᵀ} vec{Σ_{t∈T} Ds_t A_tᵀ}ᵀ ]    (16)
  = E[ (Σ_{t∈T} A_t ⊗ Ds_t)(Σ_{t∈T} A_t ⊗ Ds_t)ᵀ ]    (17)
  ≈ E[ (|T| E_t[A_t] ⊗ E_t[Ds_t])(|T| E_t[A_t] ⊗ E_t[Ds_t])ᵀ ]    (18)

Under the approximation assumption that the second-order statistics of the average activations, E_t[A_t], and the second-order statistics of the average derivatives, E_t[Ds_t], are uncorrelated, this becomes:

|T|² E[ E_t[A_t] E_t[A_t]ᵀ ] ⊗ E[ E_t[Ds_t] E_t[Ds_t]ᵀ ]    (19)

Figure 8: Results from the experiment described in Appendix B. decayKL indicates the proposed step-size selection method and decayLR indicates standard exponential learning rate decay.
This approximation is cheaper than the original KFC approximation because it is easier to compute a single outer product (after averaging over locations) than it is to compute an outer product at each location and then average. In the synchronous setting, for the large convolutional networks we experimented with, this trick resulted in a 20-30% decrease in overall wall-clock time per iteration, with little effect on per-iteration progress.
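As an illustration of the statistics involved, here is a minimal NumPy sketch (our own, under the stated assumptions) contrasting the KFC factors of Eq. 15 with the cheaper factors of Eq. 19 for one layer, given per-location activations and back-propagated derivatives; the function names and the choice of where to fold the |T| scaling are ours.

```python
import numpy as np

def kfc_factors(A, Ds):
    """A: (batch, T, d_in) receptive-field activations; Ds: (batch, T, d_out)
    back-propagated derivatives. Returns the two Kronecker factors of Eq. 15."""
    b, T, _ = A.shape
    A_fac = np.einsum('bti,btj->ij', A, A) / (b * T)    # E[A_t A_t^T]
    S_fac = np.einsum('bti,btj->ij', Ds, Ds) / (b * T)  # E[Ds_t Ds_t^T]
    return T * A_fac, S_fac  # |T| scaling folded into the activation factor

def cheap_factors(A, Ds):
    # Eq. 19: average over locations first, then take a single outer product
    # per case -- one product instead of T of them.
    A_bar, S_bar = A.mean(axis=1), Ds.mean(axis=1)      # (batch, d)
    b, T = A.shape[0], A.shape[1]
    A_fac = A_bar.T @ A_bar / b                         # E[E_t[A] E_t[A]^T]
    S_fac = S_bar.T @ S_bar / b                         # E[E_t[Ds] E_t[Ds]^T]
    return T ** 2 * A_fac, S_fac  # |T|^2 scaling, folded into one factor
```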
B EXPERIMENTAL EVALUATION OF THE STEP-SIZE SELECTION METHOD OF SECTION 5
To compare our proposed step-size selection from Section 5 with the commonly-used exponential learning rate decay, we performed a simple experiment training GoogLeNet. Both the learning rate and the threshold c on the squared Fisher norm are decayed by a factor of 0.96 after every 3200 iterations. The results of this experiment are plotted in Fig. 8 and indicate that our method outperforms the standard baseline.
In recent years, deep learning libraries have moved towards the computational graph abstraction (Bergstra et al., 2010; Abadi et al., 2016) to represent neural network computations. In this section we give a high-level description of an algorithm that scans a computational graph for parameters to which one of the various Kronecker-factored approximations can be applied, locates nodes containing the required information to compute the second-order statistics required by the approximations, and then constructs a new graph that computes the approximations and uses them to update the parameters.
For the sake of discussion, we will assume the computation graph is a directed bipartite graph that has a set of operator nodes doing some computation, and some variable nodes that hold intermediate computational results. The trainable parameters are stored in memory that is loaded or mutated through read/write operator nodes. We also assume that the trainable parameters are grouped layer-wise as a set of weights and biases. Finally, we assume the gradient computation for the trainable parameters is performed by a computation graph (which is usually generated via automatic differentiation).
In analogy to generating the gradient computation graph through automatic differentiation, given an arbitrary computation graph with a set of trainable parameters, we would like to use the existing nodes in the given graph to automatically generate a new computation graph, a "K-FAC computation graph", that computes the Kronecker-factored approximate Fisher blocks associated with each group of parameters (typically layers in a neural net), and then uses them to update the parameters.
To compute the Fisher block for a given layer, we want to find all the nodes holding the gradients of the trainable parameters in the computation graph. One simple strategy is to traverse the computation graph from the gradient nodes to their immediate parent nodes.
A set of parameters has a Kronecker-factored approximation to its Fisher block if its corresponding gradient node has a matrix product or convolution operator node as its immediate parent node. For these parameters, the Kronecker factor matrices are the second-order statistics of the inputs to the parent operator node of their gradient nodes (typically the activities A and back-propagated derivatives Ds). For other sets of parameters an exact Fisher block can be computed instead (assuming they have low enough dimension).
In a typical neural network, most of the parameters are concentrated in weight matrices that are used for matrix product or convolution operations, for which one of the existing Kronecker-factored approximations applies. Homogeneous coordinates can be used if the weights and biases of the same layer are annotated in the computation graph. The rest of the parameters are often gain and bias vectors for each hidden unit, and it is feasible to compute and invert exact Fisher blocks for these.
A neural network can also be instantiated multiple times in a computational graph (with shared parameters) to process different inputs. The gradient of the parameters shared across the instantiations is the sum of the individual gradients from each instantiation. Given such a computation graph, the immediate parent operator node of the gradient is a summation whose inputs are computed by the same type of operators. Without additional knowledge about the computation graph, one approximation is to treat the individual gradient contributions in the summation as statistically independent of each other (similarly to how gradient contributions from multiple spatial locations are treated as independent in the KFC approximation (Grosse and Martens, 2016)). Under this approximation, the Kronecker factors associated with the gradient can be computed by lumping the statistics associated with each of the gradient contributions together.
Kronecker factors can sometimes be shared by approximate Fisher blocks for two or more parameters. This is the case, for example, when a vector of units serves as input to two different weight-matrix multiplication operations. In such cases, the computation of the second-order statistics can be reused, which is what we do in our implementation.
Our implementation of distributed K-FAC in TensorFlow applies the above strategy to automatically generate K-FAC computation graphs without requiring the user to modify their existing model-definition code.
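The following Python sketch (entirely our own illustration; the node attributes and block registry are hypothetical, not TensorFlow's API) shows the shape of the scanning pass described above.

```python
def scan_for_fisher_blocks(gradient_nodes):
    """Walk from each parameter's gradient node to its parent operator and pick
    a Fisher block approximation based on the structure found. `node.parent`,
    `node.op_type`, and `node.inputs` are hypothetical attributes."""
    blocks = {}
    for param, grad in gradient_nodes.items():
        parent = grad.parent
        if parent.op_type == 'MatMul':
            # Kronecker factors: second-order stats of activations A and
            # back-propagated derivatives Ds feeding the matmul.
            blocks[param] = ('kronecker', parent.inputs)
        elif parent.op_type == 'Conv2D':
            blocks[param] = ('kfc', parent.inputs)
        elif parent.op_type == 'AddN':
            # Shared parameters: lump statistics from each contribution.
            blocks[param] = ('kronecker_lumped', [c.inputs for c in parent.inputs])
        else:
            blocks[param] = ('exact', grad)  # small vectors (gains/biases)
    return blocks
```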
Byk-VI9eg
[{"section_index": "0", "section_name": "GENERATIVE MULTI-ADVERSARIAL NETWORKS", "section_text": "Ishan Durugkar*, Ian Gemp*, Sridhar Mahadevan\nCollege of Information and Computer Sciences\n\nUniversity of Massachusetts, Amherst\nAavhavot ATA NINLN TICA\n{idurugkar, imgemp, mahadeva}@cs.umass.edu\nGenerative adversarial networks (GANs) are a framework for producing a gen:\nerative model by way of a two-player minimax game. In this paper, we propose\nthe Generative Multi-Adversarial Network (GMAN), a framework that extend:\nGANSs to multiple discriminators. In previous work, the successful training o!\nGANs requires modifying the minimax objective to accelerate training early on\nIn contrast, GMAN can be reliably trained with the original, untampered objec:\ntive. We explore a number of design perspectives with the discriminator role rang.\ning from formidable adversary to forgiving teacher. Image generation tasks com:\nparing the proposed framework to standard GANs demonstrate GMAN produces\nhigher quality samples in a fraction of the iterations when measured by a pairwis\u00a2\nGAM-type metric."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing\na generative model by way of a two-player minimax game. One player, the generator, attempts to\ngenerate realistic data samples by transforming noisy samples, z, drawn from a simple distribution\n(e.g., 2 ~ N(0, 1)) using a transformation function Gg(z) with learned weights, 6. The generator\nreceives feedback as to how realistic its synthetic sample is from another player, the discriminator,\nwhich attempts to discern between synthetic data samples produced by the generator and samples\ndrawn from an actual dataset using a function D,,(x) with learned weights, w.\nThe GAN framework is one of the more recent successes in a line of research on adversarial train-\ning in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games\nbetween learners are carefully crafted so that Nash equilibria coincide with some set of desired op-\ntimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun\net al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of appli-\ncation domains including learning censored representations (Edwards & Storkey (2015)), imitating\nexpert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending\nGANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014):\nSpringenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning\n(Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015):\nRadford et al. (2015)) have shown promise as well.\nDespite these successes, GANS are reputably difficult to train. While research is still underway to\nimprove training techniques and heuristics (Salimans et al. (2016)), most approaches have focused\non understanding and generalizing GANs theoretically with the aim of exploring more tractable\nformulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016))."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple\ndiscriminators. We review GANs and summarize our extension in Section 2. 
In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.
Contributions: To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.
The original formulation of a GAN is a minimax game between a generator, G_θ(z) : z → x, and a discriminator, D_ω(x) : x → [0, 1],

min_G max_{D∈D} V(D, G) = E_{x∼p_data(x)}[log(D(x))] + E_{z∼p_z(z)}[log(1 − D(G(z)))]    (1)

where p_data(x) is the true data distribution and p_z(z) is a simple (usually fixed) distribution that is easy to draw samples from (e.g., N(0, 1)). We differentiate between the function space of discriminators, D, and elements of this space, D. Let p_G(x) be the distribution induced by the generator, G_θ(z).
We assume D, G to be deep neural networks, as is typically the case.
In their original work, Goodfellow et al. (2014) proved that, given sufficient network capacities and an oracle providing the optimal discriminator, D* = arg max_D V(D, G), gradient descent on p_G(x) will recover the desired globally optimal solution, p_G(x) = p_data(x), so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, log(1 − D(G(z))), with −log(D(G(z))) to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, D*, to reduce the minimax game to a minimization over G only:

min_G V(D*, G) = min_G { C(G) = −log(4) + 2 · JSD(p_data ‖ p_G) }    (2)

where JSD denotes Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JSD, however, we rarely know D* and so we instead minimize V(D, G), which is only a lower bound.
This perspective of minimizing the distance between the distributions, p_data and p_G, motivated Li et al. (2015) to develop a generative model that matches all moments of p_G(x) with p_data(x) (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman-divergences, respectively.
In general, these approaches focus on exploring fundamental reformulations of V(D, G). Similarly, our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V."}, {"section_index": "3", "section_name": "2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION", "section_text": "We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating D (better approximating max_D V(D, G)) and 2) a D better matched to the generator's capabilities. Mathematically, we reformulate G's objective as min_G max F(V(D_1, G), ..., V(D_N, G)) for different choices of F (see Figure 1). Each D_i is still expected to independently maximize its own V(D_i, G) (i.e. no cooperation). We sometimes abbreviate V(D_i, G) with V_i and F(V_1, ..., V_N) with F_G(V_i).
Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options.
Here, we consider multi-discriminator variants that attempt to better approximate max_D V(D, G), providing a harsher critic to the generator."}, {"section_index": "4", "section_name": "3.1 MAXIMIZING V(D,G)", "section_text": "For a fixed G, maximizing F_G(V_i) with F := max and N randomly instantiated copies of our discriminator is functionally equivalent to optimizing V (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting max_{i∈{1,...,N}} V(D_i, G) as the loss to the generator, a very pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high fidelity samples that must hold up under the scrutiny of all N discriminators, each potentially representing a distinct max.
In practice, max_{D_i∈D} V(D_i, G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming max{V_1(t), ..., V_N(t)} > max{V_1'(t)} ∀t even if we initialize D_1(0) = D_1'(0), as it is unlikely that D_1(t) = D_1'(t) at some time t after the start of the game."}, {"section_index": "5", "section_name": "3.2 BOOSTING", "section_text": "We can also consider taking the max over N discriminators as a form of boosting for the discriminator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample x, and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D_i.
There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.
It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{V_i}. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks; boosting results appear in Appendix A.7. It is possible that boosting produces too strong an adversary for learning, which motivates the next section.
"}, {"section_index": "6", "section_name": "4 A FORGIVING TEACHER", "section_text": "The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of max_D V(D, G) to the generator. Our next perspective asks the question, "Is max_D V(D, G) too harsh a critic?""}, {"section_index": "7", "section_name": "4.1 SOFT-DISCRIMINATOR", "section_text": "In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down p_G(x), not specifically where to increase p_G(x). Furthermore, driving down p_G(x) necessarily increases p_G(x) in other regions of X (to maintain ∫_X p_G(x) dx = 1), which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing p_G(x) in approximately correct regions of X.
For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by λ, where λ = 0 corresponds to the mean and the max is recovered as λ → ∞:

AM_soft(V, λ) = Σᵢᴺ wᵢ Vᵢ    (3)

GM_soft(V, λ) = −exp( Σᵢᴺ wᵢ log(−Vᵢ) )    (4)

HM_soft(V, λ) = −( Σᵢᴺ wᵢ (−Vᵢ)⁻¹ )⁻¹    (5)

where wᵢ = e^{λVᵢ}/Z with λ ≥ 0, Vᵢ < 0. Using a softmax also has the well-known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(D̃, G) where D̃ is some convex combination of the D_i (see Appendix A.5)."}, {"section_index": "8", "section_name": "4.2 USING THE ORIGINAL MINIMAX OBJECTIVE", "section_text": "To illustrate the effect the softmax has on training, observe that the component of AM_soft(V, 0) relevant to generator training can be rewritten as

(1/N) Σᵢᴺ E_{x∼p_G(x)}[ log(1 − Dᵢ(x)) ] = (1/N) E_{x∼p_G(x)}[ log(ẑ) ]    (6)

where ẑ = Πᵢᴺ (1 − Dᵢ(x)). Note that the generator gradient is minimized at ẑ = 1 over ẑ ∈ (0, 1]. From this form, it is clear that ẑ = 1 if and only if Dᵢ = 0 ∀i, so G only receives a vanishing gradient if all Dᵢ agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single Dᵢ to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 − D).
This is in contrast to the more popular −log(D) introduced to artificially enhance gradients at the start of training.
At the beginning of training, when max_{D_i} V(D_i, G) is likely too harsh a critic for the generator, we can set λ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase λ to become more critical of the generator for more refined training."}, {"section_index": "9", "section_name": "4.3 MAINTAINING MULTIPLE HYPOTHESES", "section_text": "We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to p_data(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from p_data(x); therefore, when computing expectations of V(D, G), we only draw samples from our finite dataset. This is equivalent to training a GAN with p_data(x) = p̂_data(x), which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each with infinite capacity. In this case, the global optimum (p_G(x) = p̂_data(x)) fails to capture any of the interesting structure from p_data(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.
Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.
In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true p_data(x). Averaging over these multiple locally optimal discriminators increases the entropy of p̂_data(x) by diffusing the probability mass over the data space (see Figure 2 for an example).
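A minimal NumPy sketch (our own illustration) of the softmax-weighted arithmetic mean from Eq. 3, which interpolates between the ensemble mean (λ = 0) and the max (λ → ∞):

```python
import numpy as np

def am_soft(V, lam):
    """V: array of discriminator values V_i (all negative); lam >= 0.
    Returns sum_i w_i V_i with w_i = exp(lam * V_i) / Z (Eq. 3)."""
    w = np.exp(lam * (V - V.max()))   # subtract max for numerical stability
    w /= w.sum()
    return float(np.dot(w, V))

V = np.array([-2.0, -0.5, -1.0])
print(am_soft(V, 0.0))   # plain mean of the ensemble
print(am_soft(V, 50.0))  # approaches max(V) = -0.5 as lam grows
```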
"}, {"section_index": "10", "section_name": "4.4 AUTOMATING REGULATION", "section_text": "The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is oftentimes able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. To do this, we augment the generator objective:

min_{G, λ>0} F_G(V_i) − f(λ)    (7)

where f(λ) is monotonically increasing in λ, which appears in the softmax equations, (3)-(5). In experiments, we simply set f(λ) = cλ with c a constant (e.g., 0.001). The generator is incentivized to increase λ to reduce its objective at the expense of competing against the best available adversary D* (see Appendix A.6)."}, {"section_index": "11", "section_name": "5 EVALUATION", "section_text": "Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that, given two generator, discriminator pairs (G_1, D_1) and (G_2, D_2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.
In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators:

GMAM = log( [F^a_b(V^a) / F^a_a(V^a)] / [F^b_a(V^b) / F^b_b(V^b)] )    (8)

where a and b refer to the two GMAN variants (see Section 3 for notation F_G(V_i)). The idea here is similar. If G_b performs better than G_a with respect to both D_1 and D_2, then GMAM > 0 (remember V ≤ 0 always). If G_a performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.
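A small sketch (ours, mirroring the reconstruction of Eq. 8 above) of how the GMAM comparison can be computed from the four cross-evaluations; f_xy denotes F^x_y(V^x), variant x's aggregated objective evaluated under variant y's discriminators, and all values are assumed negative.

```python
import math

def gmam(f_ab, f_aa, f_ba, f_bb):
    """Eq. 8: GMAM > 0 indicates G_b outperforms G_a under both variants'
    discriminator ensembles; GMAM < 0 indicates the reverse."""
    return math.log((f_ab / f_aa) / (f_ba / f_bb))
```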
We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with the quality of the steady-state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare:
F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
P-boost: D_i is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
GMAN-max: max{V_i} is presented to the generator.
GAN: Standard GAN with a single discriminator (see Appendix A.2).
mod-GAN: GAN with modified objective (generator minimizes −log(D(G(z)))).
GMAN-λ: GMAN with F := arithmetic softmax with parameter λ.
GMAN*: The arithmetic softmax is controlled by the generator through λ.
All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective (ε + (1 − 2ε)σ(·)). See Appendix A.8 for further details. We test GMAN systems with N = {2, 5} discriminators. We maintain discriminator diversity by varying dropout and network depth."}, {"section_index": "12", "section_name": "5.2.1 MNIST", "section_text": "Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single discriminator run; digits at steady-state appear slightly sharper as well.
Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed λ's to the variable λ controlled by GMAN*.
Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.
Score | Variant | GMAN* | GMAN-0 | GMAN-max | mod-GAN
0.127 | GMAN* | - | −0.020 ± 0.009 | −0.028 ± 0.019 | −0.089 ± 0.036
0.007 | GMAN-0 | 0.020 ± 0.009 | - | −0.013 ± 0.015 | −0.018 ± 0.027
−0.034 | GMAN-max | 0.028 ± 0.019 | 0.013 ± 0.015 | - | −0.011 ± 0.024
−0.122 | mod-GAN | 0.089 ± 0.036 | 0.018 ± 0.027 | 0.011 ± 0.024 | -
Figure 3: Generator objective, F, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.
Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 3's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.
Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST.
Figure 6: GMAN* regulates difficulty of the game by adjusting λ. Initially, G reduces λ to ease learning and then gradually increases λ for a more challenging learning environment.
Figure 7: Pairwise GMAM for GMAN-λ and GMAN* (λ*) over 5 runs on MNIST.
We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.
Figure 8: Image quality improvement across number of discriminators at the same number of iterations for GMAN-0 on CelebA.
Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.
Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset."}, {"section_index": "13", "section_name": "6 CONCLUSION", "section_text": "We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.
We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.
In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important."}, {"section_index": "14", "section_name": "ACKNOWLEDGMENTS", "section_text": "We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF."}, {"section_index": "15", "section_name": "BIBLIOGRAPHY", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks.
"}, {"section_index": "13", "section_name": "6 CONCLUSION", "section_text": "We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.
In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step; however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important."}, {"section_index": "14", "section_name": "ACKNOWLEDGMENTS", "section_text": "We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF."}, {"section_index": "15", "section_name": "BIBLIOGRAPHY", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009.
Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten digits, 1998.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Siamak Ravanbakhsh, François Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabás Póczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016."}, {"section_index": "16", "section_name": "A APPENDIX", "section_text": "See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a steadier state. GMAN-0 with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 10's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

Figure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.
Figure 13: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a steadier state. GMAN-0 with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 12's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time."}, {"section_index": "17", "section_name": "A.2 ADDITIONAL GMAM TABLES", "section_text": "See Tables 2, 3, 4, 5 and 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception scores.

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

Score  | Variant  | GMAN*  | GMAN-1 | GAN    | GMAN-0 | GMAN-max | mod-GAN
 0.184 | GMAN*    |   -    | −0.007 | −0.040 | −0.020 | −0.028   | −0.089
 0.067 | GMAN-1   | 0.007  |   -    | −0.008 | −0.008 | −0.021   | −0.037
 0.030 | GAN      | 0.040  | 0.008  |   -    | 0.002  | −0.018   | −0.058
 0.005 | GMAN-0   | 0.020  | 0.008  | 0.002  |   -    | −0.013   | −0.018
−0.091 | GMAN-max | 0.028  | 0.021  | 0.018  | 0.013  |   -      | −0.011
−0.213 | mod-GAN  | 0.089  | 0.037  | 0.058  | 0.018  | 0.011    |   -

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

Score  | Variant | GMAN-0 | GMAN*  | GMAN-1 | mod-GAN
 0.180 | GMAN-0  |   -    | −0.008 | −0.041 | −0.132
 0.122 | GMAN*   | 0.008  |   -    | −0.038 | −0.092
 0.010 | GMAN-1  | 0.041  | 0.038  |   -    | −0.089
−0.313 | mod-GAN | 0.132  | 0.092  | 0.089  |   -

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

Variant | GMAN-0        | GMAN-1        | mod-GAN       | GMAN*
Score   | 5.878 ± 0.193 | 5.765 ± 0.168 | 5.738 ± 0.176 | 5.539 ± 0.099

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

Score  | Variant | GMAN-0 | GMAN-1 | GMAN*  | mod-GAN
 0.172 | GMAN-0  |   -    | −0.022 | −0.062 | −0.088
 0.050 | GMAN-1  | 0.022  |   -    | 0.006  | −0.078
−0.055 | GMAN*   | 0.062  | −0.006 |   -    | −0.001
−0.167 | mod-GAN | 0.088  | 0.078  | 0.001  |   -

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

Variant | GMAN-1        | GMAN-0        | GMAN*         | mod-GAN
Score   | 6.001 ± 0.194 | 5.957 ± 0.135 | 5.955 ± 0.153 | 5.738 ± 0.176
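As the captions state, per-variant scores are obtained by summing each variant's column of the pairwise matrix. A small sketch (ours) of that aggregation, using Table 5's entries:

```python
import numpy as np

# Rows/columns follow Table 5: GMAN-0, GMAN-1, GMAN*, mod-GAN.
variants = ["GMAN-0", "GMAN-1", "GMAN*", "mod-GAN"]
pairwise = np.array([
    [ 0.000, -0.022, -0.062, -0.088],   # GMAN-0 row
    [ 0.022,  0.000,  0.006, -0.078],   # GMAN-1 row
    [ 0.062, -0.006,  0.000, -0.001],   # GMAN*  row
    [ 0.088,  0.078,  0.001,  0.000],   # mod-GAN row
])
scores = pairwise.sum(axis=0)            # sum each column
for name, s in sorted(zip(variants, scores), key=lambda t: -t[1]):
    print(f"{name:8s} {s:+.3f}")         # GMAN-0 ranks first with +0.172
```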
See Figures 14 and 15.

Figure 14: Sample of pictures generated on the CelebA cropped dataset.

Figure 15: Sample of pictures generated by GMAN-0 on the CIFAR dataset (generated images shown alongside real images).

A GAN framework with two discriminators appeared in Yoo et al. (2016); however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., X = {X_1 = Domain 1, X_2 = Domain 2, ...}). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels.
In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016); however, similar to the above, it is only admissible in a semi-supervised scenario whereas ours applies to the unsupervised case."}, {"section_index": "18", "section_name": "A.5 SOFTMAX REPRESENTABILITY", "section_text": "Let softmax(V_1, ..., V_N) = V̂ ∈ [min_i V_i, max_i V_i]. Also let a = argmin_i V_i, b = argmax_i V_i, and V(t) = V((1 − t)D_a + tD_b, G), so that V(0) = V_a and V(1) = V_b. The softmax and the minimax objective V(D_i, G) are both continuous in their inputs, so by the intermediate value theorem we have that there exists t ∈ [0, 1] s.t. V(t) = V̂, which implies there exists a discriminator D̂ s.t. V(D̂, G) = V̂. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning V(D̂, G) for some D̂ selected by computing another, unknown function over the space of the discriminators. This result holds even if D̂ is not representable by the architecture chosen for D's neural network."}, {"section_index": "19", "section_name": "A.6 UNCONSTRAINED OPTIMIZATION", "section_text": "To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable Λ, define λ(Λ) = log(1 + e^Λ), and let the generator minimize over Λ ∈ ℝ.
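A one-line sketch (ours) of this reparameterization; the numerically stabilized softplus form below is our own choice:

```python
import numpy as np

def lam_of(Lambda):
    """Softplus map lambda(Lambda) = log(1 + exp(Lambda)) >= 0, written in a
    numerically stable form so large |Lambda| does not overflow."""
    return np.maximum(Lambda, 0.0) + np.log1p(np.exp(-np.abs(Lambda)))

for L in (-10.0, 0.0, 10.0):
    print(L, lam_of(L))   # ~4.5e-5, log(2) ~ 0.693, ~10.0
```

Because λ(Λ) is smooth and non-negative for any real Λ, the generator can adjust λ with plain gradient descent while the game's mixing weight stays valid.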
"}, {"section_index": "20", "section_name": "A.7 BOOSTING WITH AdaBoost.OL", "section_text": "AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + γ, γ ∈ (0, 0.5]), and in fact allows γ < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, γ's.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost)."}, {"section_index": "21", "section_name": "A.8 EXPERIMENTAL SETUP", "section_text": "All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single step gradient method as in (Nowozin et al. (2016)), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from (0.3, 0.7]. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4, and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were trained on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:
- Generator latent variables z ~ U(−1, 1)^100.
- Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1).
- Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants have either the (4, 4, 128) convolution removed or all the filter sizes divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
- ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 × 10^−4, β₁ = 0.5).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100."}]
BJluGHcee
[{"section_index": "0", "section_name": "TENSORIAL MIXTURE MODELS", "section_text": "Or Sharir, Ronen Tamari, Nadav Cohen & Amnon Shashuz\nfor . Sharir, ronent, cohennadav, shashuat@cs -huji-.ac.il\nWe introduce a generative model, we call Tensorial Mixture Models (TMMs)\nbased on mixtures of basic component distributions over local structures (e.g.\npatches in an image) where the dependencies between the local-structures are rep-\nresented by a \u2019priors tensor\u201d holding the prior probabilities of assigning a compo-\nnent distribution to each local-structure.\nIn their general form, TMMs are intractable as the priors tensor is typically of\nexponential size. However, when the priors tensor is decomposed it gives rise\nto an arithmetic circuit which in turn transforms the TMM into a Convolutional\nArithmetic Circuit (ConvAC). A ConvAC corresponds to a shallow (single hidden\nlayer) network when the priors tensor is decomposed by a CP (sum of rank-1)\napproach and corresponds to a deep network when the decomposition follows the\nHierarchical Tucker (HT) model."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative models have played a crucial part in the early development of the field of Machin\nLearning. However, in recent years they were mostly cast aside in favor of discriminative models\nlead by the rise of ConvNets (2015), which were found to perform equally well o\nbetter than classical generative counter-parts on almost any task. Despite the increased interest ii\nunsupervised learning, many of the recent studies on generative models choose to focus solely o1\nthe generation capabilities of these models (Goodfellow et al.||2014} |Gregor et al.||2015}|van dei\nOord et al. feat et al.|/2016 | Tran et al.||2016} {Chen et al.|/2016}|Kingma et al.}/2016}/Kir\nand Bengio| . There is much less emphasis on Dorm Sem SDS models to solve actud\ntasks. e.g. sem supervised fearing { cinema eta)\nvan den Oot etal, POTG| Zoran: and Wess ae\n2015} [Theis and Bethge} |2015) or unsupervised feature representation Rea Tarsr\n\n(2011). Nevertheless, work on generative models for solving actual problems are re yet t\nshow a meaningful advantage over competing discriminative models.\nOn the most fundamental level, the difference between a generative model and a discriminative on\nis simply the difference between learning P(X, Y) and learning P(Y|X), respectively. While i\nis always possible to infer P(Y|X) given P(X, Y), it might not be immediately apparent why th\ngenerative objective is preferred over the discriminative one. In{Ng and Jordan) (2002), this ques\ntion was studied w.r.t. the sample complexity, proving that under some cases it can be significantl;\nlesser in favor of the generative classifier. However, their analysis was limited only to specific pair:\nof discriminative and generative classifiers, and they did not present a general case where the th\ngenerative method is undeniably preferred. We wish to highlight one such case, where learnin:"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "he ConvAC representation of a TMM possesses several attractive properties\n\u2018irst, the inference is tractable and is implemented by a forward pass throug!\n| deep network. 
P(X, Y) is provenly better regardless of the models in question, by examining the problem of classification with missing data. Despite the artificially well-behaved nature of the typical classification benchmarks presented in current publications, real-world data is usually riddled with noise and missing values — instead of observing X we only have a partial observation — a situation that tends to be ignored in modern research. Discriminative models have no natural mechanism to handle missing data and instead must rely on data imputation, i.e. filling in missing data by a preprocessing step prior to prediction. Unlike the discriminative approaches, generative models are naturally fitted to handle missing data by simply marginalizing over the unknown values in P(X, Y), from which we can attain P(Y|X) by an application of Bayes rule. Moreover, under mild assumptions which apply to many real-world settings, this method is proven to be optimal regardless of the process by which values become missing (see sec. 5 for a more detailed discussion).
While almost all generative models can represent P(X, Y), only few can actually infer its exact value efficiently. Models which possess this property are said to have tractable inference. Many studies specifically address the hard problem of learning generative models that do not have this property. Notable amongst those are works based on Variational Inference (Kingma and Welling, 2014; Kingma et al., 2014; Blei et al., 2003; Wang and Grimson, 2007; Makhzani et al., 2015; Kingma et al., 2016), which only provide approximated inference, and ones based on Generative Adversarial Networks (Goodfellow et al., 2014; Radford et al., 2016; Springenberg, 2016; Chen et al., 2016; Salimans et al., 2016; Makhzani et al., 2015), which completely circumvent the inference problem by restructuring the learning problem as a two-player game of discriminative objectives — both of these approaches are incapable of tractable inference.
There are several advantages to models with tractable inference (e.g. they could be simpler to train), and as we have shown above, this property is also a requirement for proper handling of missing data in the form of marginalization. In practice, to marginalize over P(X, Y) means to perform integration over it; thus, even if it is tractable to compute P(X, Y), it still might not be tractable to compute every possible marginalization. Models which are capable of this are said to have tractable marginalization. Mixture Models (e.g. Gaussian Mixture Models) are the classical example of a generative model with tractable inference, as well as tractable marginalization. Though they are simple to understand, easy to train, and even known to be universal — able to approximate any distribution given sufficient capacity — they do not scale well to high-dimensional data. The Gaussian Mixture Model is an example of a shallow model — containing just a single latent variable — with limited expressive efficiency.
More generally, Graphical Models are deep and exponentially more expressive, capable of representing intricate relations between many latent variables. While not all kinds of Graphical Models are tractable, many are, e.g. Latent Tree Models (Zhang, 2004) and Sum-Product Networks (Poon and Domingos, 2011). The main issue with generic graphical models is that, by virtue of being too general, they lack the inductive bias needed to efficiently model unstructured data, e.g. images or text. Despite the success of structure learning algorithms (Huang et al., 2015; Gens and Domingos, 2013; Adel et al., 2015) on structured datasets, such as discovering a hierarchy among diseases in patients' health records, there are no similar results on unstructured datasets. Indeed, some recent works on the subject have failed to solve even simple handwritten digit classification tasks (Adel et al., 2015). Thus deploying graphical models on such cases requires experts to manually design the model. Other attempts which harness neural networks as building blocks (Dinh et al., 2014; 2016) offer tractable inference, but not tractable marginalization.
To summarize, most generative models do not have tractable inference, and of the few models which do, they all possess one or more of the following shortcomings: (i) they do not possess the expressive capacity to model high-dimensional data (e.g. images), (ii) they require explicitly designing all the dependencies of the data, or (iii) they do not have tractable marginalization.
We present in this paper a family of generative models we call Tensorial Mixture Models (TMMs), which aim to address the above shortcomings of alternative models. Under TMMs, we assume that the data generated by our model is composed of a sequence of local structures (e.g. patches in an image), where each local structure is generated from a small set of simple component distributions (e.g. Gaussian), and the dependencies between the local structures are represented by a priors tensor holding the prior probabilities of assigning a component distribution to each local structure. In their general form, TMMs are intractable, as the priors tensor is typically of exponential size. However, by decomposing the priors tensor, inference of TMMs becomes realizable by Convolutional Arithmetic Circuits (ConvACs) — a recently proposed (Cohen et al., 2016a) ConvNet architecture based on two operations, weighted sum and product pooling — which enables both tractable inference as well as tractable marginalization.

Figure 1: The decoding algorithm of an arbitrary tensor decomposition represented by a ConvAC (input coordinates are fed as indicator vectors through alternating 1x1 conv and product pooling hidden layers, ending in a dense output layer).

While Graphical Models are typically hard to design, ConvACs follow the same design conventions of modern ConvNets, which reduces the task of designing a model to simply choosing the number of channels at each layer and the size of pooling windows. ConvACs were also the subject of several theoretical studies on their expressive capacity (Cohen et al., 2016a) and comparing them to ConvNets (Cohen and Shashua, 2016a), showing they are especially suitable for high-dimensional natural data (images, audio, etc.), with a non-negligible advantage over standard ConvNets.
Sum-Product Networks are another kind of Graphical Model realizable by Arithmetic Circuits, but they do not possess the same theoretical guarantees, nor do they provide a simple method to design efficient and expressive models.
The rest of the article is organized as follows. In sec. 2 we briefly review the mathematical background on tensors required in order to follow our work. This is followed by sec. 3, which presents our generative model and its theoretical properties. How our model is trained is covered in sec. 4, and a thorough discussion on the importance of marginalization and its implications for our model is given in sec. 5. We conclude the article by presenting our experiments on classification with missing data in sec. 6, and revisit the main points of the article and future research in sec. 7.
We begin by establishing the minimal background in the field of tensor analysis required for following our work (see app. A for a more detailed review of the subject). A tensor is best thought of as a multi-dimensional array A_{d_1,...,d_N} ∈ ℝ, where ∀i ∈ [N], d_i ∈ [M_i], and N is referred to as the order of the tensor. For our purposes we typically assume that M_1 = ... = M_N = M, and denote it as A ∈ (ℝ^M)^{⊗N}. It is immediately apparent that performing operations with tensors, or simply storing them, quickly becomes intractable due to their exponential size of M^N. That is one of the primary motivations behind tensor decomposition, which can be seen as a generalization of low-rank matrix factorization.
The relationship between tensor decompositions and networks arises from the simple observation that through decomposition one can trade off storage complexity with computation, where the type of computation consists of sums and products. Specifically, the decompositions could be described by a compact representation coupled with a decoding algorithm of polynomial complexity to retrieve the entries of the tensor. Most tensor decompositions have a decoding algorithm representable via computation graphs of products and weighted sums, also known as Arithmetic Circuits or Sum-Product Networks (Poon and Domingos, 2011). More specifically, these circuits take as input N indicator vectors δ_1, ..., δ_N representing the coordinates (d_1, ..., d_N), where δ_i = 1_{[d_i]} (a one-hot vector), and output the value of A_{d_1,...,d_N}, where the weights of these circuits form the compact representation of tensors.
Applying this perspective to two of the most common decomposition formats, CANDECOMP/PARAFAC (CP) and Hierarchical Tucker (HT), gives rise to a shared framework for representing their decoding circuits by convolutional networks, as illustrated in fig. 1, where a shallow network with one hidden layer corresponds to the CP decomposition, and a deep network with log₂(N) hidden layers corresponds to the HT decomposition. The networks consist of just product pooling and 1x1 conv layers. Having no point-wise activations between the layers, the non-linearity of the models stems from the product pooling operation itself. The pooling layers also control the depth of the network by the choice of the size and the shape of pooling windows. The conv operator is not unlike the standard convolutional layer of ConvNets, with the sole difference being that it may operate without coefficient sharing, i.e. the filters that generate feature maps by sliding across the previous layer are not necessarily shared across spatial locations.
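To make the decoding-circuit view concrete, the following sketch (ours, not from the paper) decodes entries of a rank-R CP decomposition: one weighted-sum stage per mode followed by a product and a final weighted sum, mirroring the shallow network of fig. 1. Names and toy sizes are assumptions.

```python
import numpy as np

N, M, R = 4, 3, 5                        # order, dim per mode, CP rank
rng = np.random.default_rng(0)
lam = rng.random(R)                       # top-level weights
factors = [rng.random((R, M)) for _ in range(N)]  # one R x M matrix per mode

def cp_entry(indices):
    """Decode A[d1,...,dN] = sum_r lam_r * prod_i factors[i][r, d_i].
    Cost is O(N*R) instead of storing all M**N entries."""
    prod = np.ones(R)
    for i, d in enumerate(indices):
        prod *= factors[i][:, d]          # weighted-sum stage (indicator input)
    return float(lam @ prod)              # product pooling, then final weighted sum

# Compare against the explicitly materialized tensor on a tiny example.
full = np.einsum('r,ra,rb,rc,rd->abcd', lam, *factors)
print(np.isclose(cp_entry((0, 2, 1, 0)), full[0, 2, 1, 0]))  # True
```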
Arithmetic Circuits constructed from the above conv and product pooling layers are called Convolutional Arithmetic Circuits, or ConvACs for short, first suggested by Cohen et al. (2016a) as a theoretical framework for studying standard convolutional networks, sharing many of the defining traits of the latter, most noteworthy the locality, sharing and pooling properties of ConvNets. Unlike general circuits, the structure of the network is determined solely by two parameters, the number of channels of each conv layer and the size of pooling windows, which indirectly controls the depth of the network. Any decomposition that corresponds to a ConvAC can represent any tensor, given a sufficient number of channels, though deeper circuits result in more efficient representations (Cohen et al., 2016a).
Finally, since we are dealing with generative models, the tensors we study are non-negative and sum to one, i.e. the vectorization of A (rearranging its entries to the shape of a vector), denoted by vec(A), is constrained to lie in the multi-dimensional simplex, denoted by:

Δ^k = { x ∈ ℝ^{k+1} : Σ_{i=1}^{k+1} x_i = 1, ∀i ∈ [k+1] : x_i ≥ 0 }    (1)

We consider an instance X composed of N local structures:

X = (x_1, ..., x_N) ∈ (ℝ^s)^N

This representation is quite natural for many high-dimensional input domains such as images — where the local structures represent patches consisting of s pixels — voice through spectrograms, and text through words.
A well-known observation, which has been verified in several empirical studies (e.g. by Zoran and Weiss (2011)), is that the distributions of local structures typically found in natural data could be sufficiently modeled by a mixture model consisting of only a few components (on the order of 100) of simple distributions (e.g. Gaussian). Assuming the above holds for X ∈ (ℝ^s)^N, let {P(x|d; θ_d)}_{d=1}^M be the mixing components, parameterized by θ_1, ..., θ_M, from which local structures are generated, i.e. for all i ∈ [N] there exists d_i ∈ [M] such that x_i ~ P(x|d_i; θ_{d_i}), where d_i is a hidden variable specifying the matching component for the i-th local structure. The probability density of sampling X is then fully described by:

P(X) = Σ_{d_1,...,d_N=1}^{M} P(d_1, ..., d_N) · Π_{i=1}^{N} P(x_i|d_i; θ_{d_i})    (2)

where P(d_1, ..., d_N) represents the prior probability of assigning components d_1, ..., d_N to their respective local structures x_1, ..., x_N. Even though we had to make an assumption on X to derive eq. (2), it is important to note that if we allow M to become unbounded, then any distribution with support in (ℝ^s)^N could be approximated by this equation. The argument follows from the universality property of the common parametric families of distributions (Gaussian, Laplacian, etc.), where any distribution can be approximated given a sufficient number of components from these families, and thus the assumption always holds to some degree (see app. B for the complete proof).
The prior probabilities P(d_1, ..., d_N) can also be represented by a tensor A ∈ (ℝ^M)^{⊗N} of order N, given that the vectorization of A is constrained to the simplex, i.e. vec(A) ∈ Δ^{M^N − 1} (see eq. (1)). Thus, we refer to eq. (2) as a Tensorial Mixture Model (TMM) with priors tensor A and mixing components {P(x|d; θ_d)}_{d=1}^M. Notice that if N = 1 then we obtain the standard mixture model, whereas for a general N it is equivalent to a mixture model with tensorised mixing weights and conditionally independent mixing components.
Unlike standard mixture models, we cannot perform inference directly from eq. (2), nor can we even store the priors tensor directly, given its exponential size of M^N entries. Therefore the TMM as presented by eq. (2) is not tractable. The way to make the TMM tractable is to replace the tensor A_{d_1,...,d_N} by a tensor decomposition and, as described in the previous section, this gives rise to arithmetic circuits. But before we present our approach for tractable TMMs through tensor decompositions, it is worth examining some of the TMM special cases and how they relate to other known generative models.
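A minimal sketch (ours) of eq. (2) evaluated by brute force for a toy TMM with unit-variance Gaussian components, making the M^N enumeration explicit; all names and sizes are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, M, s = 3, 4, 2                         # local structures, components, patch size
mu = rng.normal(size=(M, s))              # Gaussian component means
A = rng.random((M,) * N)                  # priors tensor of order N
A /= A.sum()                              # vec(A) on the simplex, as in eq. (1)

def component_pdf(x, d):
    """Unit-variance diagonal Gaussian P(x | d; theta_d)."""
    z = x - mu[d]
    return np.exp(-0.5 * np.dot(z, z)) / (2 * np.pi) ** (s / 2)

def tmm_density(X):
    """Brute-force eq. (2): sums over all M**N component assignments."""
    return sum(
        A[ds] * np.prod([component_pdf(X[i], d) for i, d in enumerate(ds)])
        for ds in itertools.product(range(M), repeat=N)
    )

X = rng.normal(size=(N, s))
print(tmm_density(X))                     # already 4**3 = 64 terms at this toy size
```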
Figure 2: Inference of a TMM carried out by a ConvAC. The representation layer computes rep(i, d) = P(x_i|d; θ_d), followed by alternating 1x1 conv and product pooling layers and a dense output layer computing P(X)."}, {"section_index": "3", "section_name": "3.1 SPECIAL CASES", "section_text": "We have already shown that TMMs can be thought of as a special case of mixture models, but it is important to also note that diagonal Gaussian Mixture Models (GMMs), probably the most common type of mixture models, are a strict subset of TMMs. Assume M = N · K, as well as:

P(x|d; θ_d) = N(x; μ_{k,i}, diag(σ²_{k,i})),   d = N · (k − 1) + i

P(d_1, ..., d_N) = { w_k   if ∃k ∈ [K] s.t. ∀i ∈ [N], d_i = N · (k − 1) + i
                   { 0     otherwise

for some w ∈ Δ^{K−1}, under which eq. (2) reduces to:

P(X) = Σ_{k=1}^{K} w_k Π_{i=1}^{N} N(x_i; μ_{k,i}, diag(σ²_{k,i})) = Σ_{k=1}^{K} w_k N(x; μ_k, diag(σ²_k))

which is equivalent to a diagonal GMM with mixing weights w ∈ Δ^{K−1} and Gaussian mixture components with means {μ_k}_{k=1}^K and covariances {diag(σ²_k)}_{k=1}^K.
While the previous example highlights another connection between TMMs and mixture models, it does not take full advantage of the priors tensor, setting most of its entries to zero. Perhaps the simplest assumption we could make about the priors tensor, without it becoming degenerate, would be to assume that the hidden variables d_1, ..., d_N are statistically independent, i.e. P(d_1, ..., d_N) = Π_{i=1}^N P(d_i). Then rearranging eq. (2) will result in a product of mixture models:

P(X) = Π_{i=1}^{N} Σ_{d=1}^{M} P(d_i = d) · P(x_i|d_i = d; θ_d)

If we also assume that the priors are identical in addition to being independent, i.e. P(d_1 = d) = ... = P(d_N = d), then this model becomes a bag-of-words model, where the components {P(x|d; θ_d)}_{d=1}^M define a soft dictionary for translating local structures into "words", as is often done when applying bag-of-words models to images. Despite this familiar setting, had we subscribed to only using independent priors, we would lose the universality property of the general TMM model — it would not be capable of modeling dependencies between the local structures."}, {"section_index": "4", "section_name": "3.2 DECOMPOSING THE PRIORS TENSOR", "section_text": "We have just seen that TMMs could be made tractable through constraints on the priors tensor, but it was at the expense of either not taking advantage of its tensor structure, or losing its universality property. Our approach for tractable TMMs is to apply tensor decompositions to the priors tensor, which is the conventional method for tackling the exponential size of high-order tensors.
We have already mentioned in sec. 2 that any decomposition representable by ConvACs, including the well-known CP and HT decompositions, can represent any tensor, and thus applying them would not limit the expressivity of our model. Fixing a ConvAC representing the priors tensor, i.e. Φ_Θ(δ_1, ..., δ_N) = A_{d_1,...,d_N}, where Θ are the parameters of the ConvAC and {δ_i}_{i=1}^N are the indicator representations of {d_i}_{i=1}^N, and simply rearranging the terms of eq. (2) after substituting the entries of the priors tensor with the sums and products expression of Φ_Θ(δ_1, ..., δ_N), results in:

P(X) = Φ_Θ(q^1, ..., q^N),   where ∀i ∈ [N], ∀d ∈ [M]: q^i_d = P(x_i|d_i = d; θ_d)    (4)

which is nearly equivalent to how the ConvAC is used for computing the entries of the priors tensor, differing only in the way the input vectors are defined. Namely, eq. (4) is a result of replacing the indicator vectors δ_i with probability vectors q^i, which could be interpreted as a soft variant of indicator vectors. Viewed as a network, it begins with a representation layer, mapping the local structures to the likelihood probabilities of belonging to each mixing component, i.e. {x_i}_{i=1}^N ↦ {P(x_i|d_i = d; θ_d)}_{i∈[N], d∈[M]}. Following the representation layer is the same ConvAC described by Φ_Θ(·, ..., ·). The complete network is illustrated by fig. 2.
Unlike general tensors, for a TMM to represent a valid distribution, the priors tensor is constrained to the simplex, and thus not every choice of parameters for the decomposition would result in a tensor holding this constraint. By restricting ourselves to non-negative decomposition parameters, i.e. using positive weights in the 1x1 conv layers, we guarantee the resulting tensors are non-negative as well. Additionally, normalizing the non-negative tensor is equivalent to requiring the parameters to be restricted to the simplex, i.e. for every layer l and spatial position j, the weight vector w^{l,j} ∈ Δ^{r_{l−1}−1} of the respective 1x1 conv kernel is normalized to sum to one. Under these constraints we refer to it as a generative decomposition. Notice that restricting ourselves to generative decompositions does not limit the expressivity of our model, as we can still represent any non-negative tensor and thus any distribution that the original TMM could represent. In discussing the above, it helps to distinguish between the two extreme cases of generative decompositions representable by ConvACs, namely, the shallow Generative CP decomposition referred to as the GCP-model, and the deep Generative HT decomposition referred to as the GHT-model.
Non-negative matrix and tensor decompositions have a long history together with the development of corresponding generative models, e.g., pLSA, which uses non-negative matrix decompositions for text analysis, and which was later extended to images with the help of "visual words" (Li and Perona, 2005). The non-negative variant of the CP decomposition presented above is related to the more general Latent Class Models (Zhang, 2004), which could be seen as a multi-dimensional pLSA. Likewise, the non-negative HT decomposition is related to the Latent Tree Model (Zhang, 2004; Mourad et al., 2013) with the structure of a complete binary tree. Thus both the GCP and GHT models can be represented as a two-level graphical model, where the top level is either an LCM or an LTM, and the bottom level represents the local structures which are conditionally sampled from the mixing components of the TMM.
To conclude, the application of ConvACs to decompose the priors tensor leads to tractable TMMs with inference implemented by convolutional networks, has deep roots in the classical use of non-negative factorizations for generative models, and, given sufficient resources, does not limit expressivity.
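For the shallow (GCP) case, a sketch (ours) of eq. (4): the same decoding computation as before, fed with likelihood vectors q^i instead of indicators, evaluates P(X) in O(N·M·R) operations; the simplex-normalized weights correspond to a generative decomposition. Names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, R = 3, 4, 5
w = rng.random(R); w /= w.sum()                      # simplex top-level weights
factors = rng.random((N, R, M))
factors /= factors.sum(axis=2, keepdims=True)        # each conv kernel row on the simplex

def gcp_density(q):
    """Eq. (4) for a generative CP decomposition: P(X) = Phi(q^1,...,q^N),
    where q[i, d] = P(x_i | d_i = d; theta_d). Cost O(N*M*R), not M**N."""
    prod = np.ones(R)
    for i in range(N):
        prod *= factors[i] @ q[i]                    # weighted sum, then product pooling
    return float(w @ prod)

q = rng.random((N, M))                               # per-patch component likelihoods
print(gcp_density(q))
```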
However, practical considerations raise the question of the extent of the expressive capacity of our models when the size of the ConvAC is polynomial with respect to the number of local structures and mixing components. This question was thoroughly studied in a series of works analyzing the importance of depth (Cohen et al., 2016a), comparing ConvACs to the expressive capacity of ConvNets (Cohen and Shashua, 2016a), showing the latter is less capable than ConvACs, and analyzing the ability of ConvACs to model the dependency structure typically found in natural data. We prove in app. D that their main results are not hindered by the introduction of simplex constraints to ConvACs as we did above. Together these results give us a detailed understanding of how the number of channels and the size of pooling windows control the expressivity of the model. A more in-depth overview of their results and their application to our models can be found in app. C."}, {"section_index": "5", "section_name": "3.3 COMPARISON TO SUM-PRODUCT NETWORKS", "section_text": "Picking the right SPN structure from the infinite possible combinations of sum and product nodes could be perplexing even for experts in the field. Indeed, Poon and Domingos (2011) and Gens and Domingos (2012) had to hand-engineer complex structures for each dataset, guided by prior knowledge and heuristics, and while their results were impressive for their time, they are poor by current measures. This led to many works studying the task of learning the structure directly from the data itself (Peharz et al., 2013; Gens and Domingos, 2013; Adel et al., 2015; Rooshenas and Lowd, 2014), which indeed improved upon manually designed SPNs on some tasks. Nevertheless, when applied to unstructured datasets, similar success has yet to be demonstrated (see sec. 1).

Figure 3: Classifier variant of TMM carried out by a ConvAC. The shared representation and ConvAC layers are followed by a dense output layer with one output per class.

As opposed to SPNs, TMMs implemented with ConvACs have an easily designed architecture with only two sets of parameters, the size of pooling windows and the number of channels, both of which can be directly related to the expressivity of the model, as detailed in app. C. Additionally, while SPNs are typically trained using special EM-type algorithms, TMMs are trained using stochastic gradient descent-type algorithms, as is common in training neural networks (see sec. 4 for details), thereby benefiting from the shared experience of a large and growing community."}, {"section_index": "6", "section_name": "4
CLASSIFICATION AND LEARNING WITH TMMS", "section_text": "Until this point we presented the TMM as a generative model for high-dimensional data, which is universal, and whose structure is tightly coupled to that of convolutional networks. We have yet to incorporate classification and learning into our framework. This is the purpose of the current section.
The common way to introduce object classes into a generative framework is to consider a class variable Y, and the distributions P(X|Y) of the instance X conditioned on Y. Under our model this is equivalent to having shared mixing components, but different priors tensors P(d_1, ..., d_N|Y = y) for each class. Though it is possible to decompose each priors tensor separately, it is much more efficient to employ the concept of joint tensor decomposition, and use a shared ConvAC instead. This results in a single ConvAC computing inference, where instead of a single scalar output, multiple outputs are driven by the network — one for each class — as illustrated through the network in fig. 3.
Heading on to predicting the class of a given instance, we note that in practice, a naive implementation of ConvACs is not numerically stable, the reason being that high degree polynomials (as computed by such networks) are easily susceptible to numerical underflow or overflow. The conventional method for tackling this issue is to perform all computations in log-space. This transforms ConvACs into SimNets, a recently introduced deep learning architecture (Cohen et al., 2016b). Finally, prediction is carried out by returning the most likely class, which in the common setting of uniform class priors (P_D(Y = y) = 1/K) translates to simply predicting the class for which the corresponding network output is maximal, in accordance with standard neural network practice:

ŷ(X) = argmax_y P(Y = y|X) = argmax_y log P(X|Y = y)

Suppose now that we are given a training set S = {(X^(i) ∈ (ℝ^s)^N, Y^(i) ∈ [K])}_{i=1}^{|S|} of instances and labels, and would like to fit the parameters Θ of a multi-class TMM according to the Maximum Likelihood method. Equivalently, we minimize the Negative Log-Likelihood (NLL) loss function L(Θ) = E[−log P_Θ(X, Y)], which can be factorized into two separate loss functions:

L(Θ) = E[−log P_Θ(Y|X)] + E[−log P_Θ(X)]

where E[−log P_Θ(Y|X)] is commonly known as the cross-entropy loss, which we refer to as the discriminative loss, while E[−log P_Θ(X)] corresponds to maximizing the prior likelihood P(X), and has no analogy in standard discriminative neural networks. It is this term that captures the generative nature of our model, and we accordingly refer to it as the generative loss. Now, let N_Θ(X; y) := log P_Θ(X|Y = y) stand for the y'th output of the SimNet (ConvAC in log-space) realizing the TMM with parameters Θ; then in the case of uniform class priors, the empirical estimation of L(Θ) may be written as:

L(Θ) ≈ −(1/|S|) Σ_{i=1}^{|S|} log [ e^{N_Θ(X^(i); Y^(i))} / Σ_{y=1}^{K} e^{N_Θ(X^(i); y)} ] − (1/|S|) Σ_{i=1}^{|S|} log [ (1/K) Σ_{y=1}^{K} e^{N_Θ(X^(i); y)} ]
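A sketch (ours) of this empirical loss computed from the per-class log-space outputs N_Θ(X; y), using log-sum-exp for numerical stability and assuming uniform class priors as in the text:

```python
import numpy as np

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def tmm_nll(outputs, labels):
    """Empirical NLL split into the two terms of the text.

    outputs[i, y] = N_Theta(X_i; y) = log P(X_i | Y = y); uniform class priors.
    """
    n, K = outputs.shape
    log_px = logsumexp(outputs, axis=1) - np.log(K)           # log P(X)
    log_py_given_x = outputs[np.arange(n), labels] - logsumexp(outputs, axis=1)
    discriminative = -log_py_given_x.mean()                    # cross-entropy loss
    generative = -log_px.mean()                                # -E[log P(X)]
    return discriminative + generative

outputs = np.random.randn(8, 10)   # 8 instances, 10 classes
labels = np.random.randint(0, 10, size=8)
print(tmm_nll(outputs, labels))
```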
Maximum likelihood training of generative models is oftentimes based on dedicated algorithms such as Expectation-Maximization, which are typically difficult to apply at scale. We leverage the resemblance between our objective and that of standard neural networks, and apply the same optimization procedures used for the latter, which have proven to be extremely effective for training classifiers at scale. Whereas other works have used tensor decompositions for the optimization of probabilistic models (Song et al., 2013; Anandkumar et al., 2014), we employ them strictly for modeling and instead make use of conventional methods. In particular, our implementation of TMMs is based on the SimNets extension of the Caffe toolbox (Cohen et al., 2016b; Jia et al., 2014), and uses standard Stochastic Gradient Descent-type methods for optimization (see sec. 6 for more details)."}, {"section_index": "7", "section_name": "5 CLASSIFICATION WITH MISSING DATA THROUGH MARGINALIZATION", "section_text": "A major advantage of generative models over discriminative ones lies in the ability to cope with missing data, specifically in the context of classification. By and large, discriminative methods either attempt to complete missing parts of the data before classification, known as data imputation, or learn directly to classify data with missing values (Little and Rubin, 2002). The first of these approaches relies on the quality of data completion, a much more difficult task than the original one of classification with missing data. Even if the completion were optimal, the resulting classifier is known to be sub-optimal (see app. E). The second approach does not make this assumption, but nonetheless assumes that the distribution of missing values at train and test time are similar, a condition which often does not hold in practice. Indeed, Globerson and Roweis (2006) coined the term "nightmare at test time" to refer to the common situation where a classifier must cope with missing data whose distribution is different from that encountered in training.
As opposed to discriminative methods, generative models are endowed with a natural mechanism for classification with missing data. Namely, a generative model can simply marginalize over missing values, effectively classifying under all possible completions, weighing each completion according to its probability. This, however, requires tractable inference and marginalization. We have already shown in sec. 3 that TMMs support the former, and will show in sec. 5.1 that they also bring forth marginalization which is just as efficient. Beforehand, we lay out the formulation of classification with missing data.
Let X be a random vector in ℝ^s representing an object, and Y be a random variable in [K] := {1, ..., K} representing its label. Denote by D(X, Y) the joint distribution of (X, Y), and by (x ∈ ℝ^s, y ∈ [K]) specific realizations thereof. Assume that after sampling a specific instance (x, y), a random binary vector M is drawn conditioned on X = x. More concretely, we sample a binary mask m ∈ {0, 1}^s (a realization of M) according to a distribution Q(·|X = x). x_i is considered missing if m_i is equal to zero, and observed otherwise. Formally, we consider the vector x⊙m, whose i'th coordinate is defined to hold x_i if m_i = 1, and the wildcard ∗ if m_i = 0. The classification task is then to predict y given access solely to x⊙m.
Following the works of Rubin (1976) and Little and Rubin (2002), we consider three cases for the missingness distribution Q(M = m|X = x): missing completely at random (MCAR), where M is independent of X, i.e. Q(M = m|X = x) is a function of m but not of x; missing at random (MAR), where M is independent of the missing values in X, i.e. Q(M = m|X = x) is a function of both m and x, but is not affected by changes in x_i if m_i = 0; and missing not at random (MNAR), covering the rest of the distributions for which M depends on missing values in X, i.e. Q(M = m|X = x) is a function of both m and x, which at least sometimes is sensitive to changes in x_i when m_i = 0.
Let P be the joint distribution of the object X, label Y, and missingness mask M:

P(X = x, Y = y, M = m) = D(X = x, Y = y) · Q(M = m|X = x)

For given x ∈ ℝ^s and m ∈ {0, 1}^s, denote by o(x, m) the event where the random vector X coincides with x on the coordinates i for which m_i = 1. For example, if m is an all-zero vector, o(x, m) covers the entire probability space, and if m is an all-one vector, o(x, m) corresponds to the event X = x. With these notations in hand, we are now in a position to characterize the optimal predictor in the presence of missing data:
Claim 1. For any data distribution D and missingness distribution Q, the optimal classification rule in terms of 0-1 loss is given by:

h*(x⊙m) = argmax_y P(Y = y|o(x, m)) · P(M = m|o(x, m), Y = y)

Corollary 1. When the distribution Q is MAR (or MCAR), the classifier admits a simpler form, referred to as the marginalized Bayes predictor:

h*(x⊙m) = argmax_y P(Y = y|o(x, m))    (5)

Corollary 1 indicates that in the MAR setting, which is frequently encountered in practice, optimal classification does not require prior knowledge regarding the missingness distribution Q. As long as one is able to realize the marginalized Bayes predictor (eq. (5)), or equivalently, to compute the likelihoods of observed values conditioned on labels (P(o(x, m)|Y = y)), classification with missing data is guaranteed to be optimal, regardless of the corruption process taking place. This is in stark contrast to discriminative methods, which require access to the missingness distribution during training, and thus are not able to cope with unknown conditions at test time.
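As a toy illustration of eq. (5) (not a TMM), consider a generative classifier with one diagonal Gaussian per class; marginalizing a diagonal Gaussian over missing coordinates simply drops them from the log-likelihood, so the marginalized Bayes predictor can be realized directly. Sketch ours:

```python
import numpy as np

rng = np.random.default_rng(3)
K, s = 3, 5
mu = rng.normal(size=(K, s))              # per-class diagonal-Gaussian means
sigma = np.ones((K, s))                   # unit variances for simplicity
prior = np.full(K, 1.0 / K)               # uniform class priors

def marginalized_bayes(x, m):
    """Eq. (5): argmax_y P(Y=y | o(x, m)); for a diagonal Gaussian,
    marginalizing a coordinate just drops it from the log-likelihood."""
    obs = m.astype(bool)
    ll = -0.5 * (((x - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))
    scores = np.log(prior) + ll[:, obs].sum(axis=1)   # log P(o(x, m), Y=y)
    return int(np.argmax(scores))

x = mu[1] + 0.1 * rng.normal(size=s)      # sample near class 1
m = np.array([1, 0, 1, 1, 0])             # coordinates 1 and 4 are missing
print(marginalized_bayes(x, m))           # classifies from observed entries only
```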
Most of this section dealt with the task of prediction given an input with missing data, where we assumed we had access to a complete and uncorrupted training set, and only faced missingness during prediction. However, many times we wish to tackle the reverse problem, where the training set itself is riddled with missing data. Generative methods can once again leverage their natural ability to handle missing data in the form of marginalization during the learning stage. Generative models are typically learned through the Maximum Likelihood principle. When it comes to learning from missing data, the marginalized likelihood objective is used instead. Under the MAR assumption, this method results in an unbiased classifier (Little and Rubin, 2002)."}, {"section_index": "8", "section_name": "5.1 EFFICIENT MARGINALIZATION WITH TMMS", "section_text": "As discussed above, with generative models optimal classification with missing data (in the MAR setting) is oblivious to the specific missingness distribution. However, it requires tractable computation of the likelihood of observed values conditioned on labels, i.e. tractable marginalization over missing values. The plurality of generative models that have recently gained attention in the deep learning community (Goodfellow et al., 2014; Kingma and Welling, 2014; Dinh et al., 2014; 2016) do not meet this requirement, and thus are not suitable for classification with missing data. TMMs on the other hand bring forth extremely efficient marginalization, requiring only a single forward pass through the corresponding network. Details follow.
Marginalizing a TMM over the missing local structures amounts to replacing the representation layer outputs of the missing inputs with the constant 1:

rep(i, d) = { 1              , x_i is missing (marginalized)
            { P(x_i|d; θ_d)  , x_i is visible (not marginalized)

To conclude, with TMMs marginalizing over missing values is just as efficient as plain inference — it requires only a single pass through the corresponding ConvAC. Accordingly, the marginalized Bayes predictor (eq. (5)) is realized efficiently, and classification with missing data (in the MAR setting) is optimal, regardless of the missingness distribution. This capability is not provided by discriminative methods, which rely on the distribution of missing values being known at training, nor by contemporary generative models, which do not bring forth tractable marginalization.
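A sketch (ours) of this rule for the shallow GCP case: feeding rep(i, ·) = 1 for a missing patch multiplies each simplex-normalized kernel by the all-ones vector, contributing exactly 1 to the product, so the marginal likelihood of the observed patches costs one forward pass. Names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, R = 3, 4, 5
w = rng.random(R); w /= w.sum()
factors = rng.random((N, R, M))
factors /= factors.sum(axis=2, keepdims=True)        # simplex rows: sum_d f[r, d] = 1

def gcp_marginal(q, missing):
    """Marginalized likelihood P(o(x, m)): missing patches feed rep(i, .) = 1,
    so their factor contribution is exactly 1 and they drop out of the product."""
    prod = np.ones(R)
    for i in range(N):
        rep = np.ones(M) if missing[i] else q[i]
        prod *= factors[i] @ rep
    return float(w @ prod)

q = rng.random((N, M))
missing = np.array([False, True, False])
# Same cost as plain inference: one forward pass, no explicit integration.
print(gcp_marginal(q, missing))
```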
Table 1: Blind classification with missing data on the binary MNIST dataset with feature deletion noise according to Globerson and Roweis (2006), averaged over all pairs of digits.

Model      | N = 0 | 25   | 50   | 75   | 100  | 125  | 150
LP-Based   | 97.9  | 97.5 | 96.4 | 94.1 | 89.2 | 80.9 | 70.2
GHT-model  | 98.5  | 98.2 | 97.8 | 96.5 | 93.9 | 87.1 | 76.3

We demonstrate the properties of our models through both qualitative and quantitative experiments. We present our state-of-the-art results on image classification with missing data, with robustness to various missingness distributions. In app. G we show visualizations produced by our models, which give us insight into their inner workings. Our experiments were conducted on the MNIST digit classification dataset, consisting of 60000 grayscale images of single digit numbers, as well as the small NORB 3D object recognition dataset, consisting of 48600 grayscale stereo images of toys belonging to 5 categories: four-legged animals, human figures, airplanes, trucks, and cars.
In all our experiments we use either the GCP or GHT model with Gaussian mixing components. The weights of the conv layers are partially shared and are represented in log-space. For the case of the GHT model, we use 2 x 2 pooling windows for all pooling layers. We train our model according to the loss described in sec. 4, using the Adam variant of SGD and decaying learning rates. We apply L²-regularization to the weights while taking into account that they are stored in log-space. Additionally, we also adopt a probabilistic interpretation of dropout by introducing random marginalization layers, which randomly select spatial locations in the input and marginalize over them. We provide a complete and detailed description of our experiments in app. F."}, {"section_index": "9", "section_name": "6.1 IMAGE CLASSIFICATION WITH MISSING DATA", "section_text": "We demonstrate the effectiveness of our method for classification with missing data of unknown missingness distribution (see sec. 5) by conducting three kinds of experiments on the MNIST dataset, and an additional experiment on the NORB dataset. We begin by following the protocol of Globerson and Roweis (2006) — the binary classification problem of digit pairs with feature deletion noise — where we compare our method to the best known result on that benchmark (Dekel and Shamir, 2008). For our main experiment, we move to the harder multi-class digit classification under two different MAR missingness distributions, comparing against other methods which do not assume a specific missingness distribution. We repeat this experiment on the NORB dataset as well. Finally, our last experiment demonstrates the failure of purely discriminative methods to adapt to previously unseen missingness distributions, underlining the importance of the generative approach to missing data. We do wish to emphasize that missing data is not typically found in most image data; nevertheless, experiments on images with missing data are very common, for both classification and inpainting tasks. Additionally, there is nothing about our method, nor the methods we compare it against, that is very specific to the image domain, and thus any conclusions drawn should not be limited to the chosen datasets, but be taken in the broader context of the missing data problem.
The problem of learning classifiers which are robust to unforeseen missingness distributions at test time was first proposed by Globerson and Roweis (2006). They suggested missing values could be denoted by values which were deleted, i.e. their values were changed to zero, and a robust classifier would have to assume that any of its zero-value inputs could be the result of such a deletion process, and must be treated as missing. Their solution was to train a linear classifier and formulate the optimization as a quadratic program under the constraint that N of its features could be deleted. In Dekel and Shamir (2008), this solution was improved upon and generalized to other kinds of corruption beyond deletion, as well as to an adversarial setting.
We follow the central experiment of these articles, conducted on binary classification of digit pairs from the MNIST dataset, where N non-zero pixels are deleted with uniform probability over the set of N non-zero pixel locations of the given image. We compare our method, using the deep GHT-Model, solely against the LP-based algorithm of Dekel and Shamir (2008), which is the previous state-of-the-art on this task. Due to the limited computational resources at the time, the original experiments were limited to training sets of just 50 images per digit.
We have repeated their experi\nment, using the implementation kindly supplied to us by the authors, and increased the limit to 30(\nimages per digit, which is the maximal amount possible with our current computational resources\nThough it is possible to train our own models using much larger training sets, we have trained then\nunder the same limitations. Despite the fact that missingness distribution of this experiment is of th\nMNAR type, which our method was not guarantied to be optimal under, the test results (see table[I\nclearly show the large gap between our method and theirs. Additionally, whereas our method uses ;\nsingle model trained once and with no prior knowledge on the missingness distribution, their metho\nrequires training special classifiers for each value of NV, chosen through a cross-validation process\ndisqualifying it from being truly blind to the missingness distribution.\nWe continue to our main experiments on multi-class blind classification with missing data, where\nthe missingness distribution is completely unknown during test time, and a single classifier mus\nhandle all possible distributions. We simulate two kinds of MAR missingness distributions: (i) ar\niid. mask with a fixed probability p \u20ac [0, 1] of missing each pixel, and (ii) a mask composed of the\nunion of NV possibly overlapping rectangles of width and height equal to W, each with a randomly\nassigned position in the image, distributed uniformly. We evaluate both our shallow GCP-Mode\nas well as the deep GHT-Model against the most widely used methods for blind classification witt\nmissing data. We repeat these experiments on the MNIST and NORB datasets, the results of whict\nare presented in fig. [4]\nAs a baseline for our results, we use K-Nearest Neighbors (KNN) to vote on the most likely class o\na given example. We extend KNN to missing data by comparing distances using only the observec\nentries, i.e. for a corrupted instance x\u00a9m, and a clean image from the training set X, we compute\nd(x, x\u00a9m)= eins =1 Fis \u2014aj;;)\u201d. Though it scores better than the majority of modern methods w\u00ab\nhave compared, in practice KNN is very inefficient, even more so for missing data, which prevent:\nmost common memory and runtime optimizations typically employed to reduce its inefficiency\nAdditionally, KNN does not generalize well for more complex datasets, as is evident by its poo\nperformance on the clean test set of the NORB dataset.\nFigure 4: Blind classification with missing data. (alc) Testing i.i.d. corruption with probability p for\neach pixel. Testing missing rectangles corruption with N missing rectangles, each of width and\n\nhight equal to W. (*) Accuracies are estimated from the plot of|Goodfellow et al.|(2013). ({) Data\nimputation algorithms followed by a ConvNet. Raw results can be found in app.|H!\nDian Pest 0.25 0.50 0.75 0.90 0.95 0.99\n0.25 98.9 97.8 78.9 324 17.6 11.0\n0.50 99.1 986 94.6 68.1 37.9 12.9\n0.75 98.9 98.7 97.2 83.9 564 16.7\n0.90 97.6 97.5 96.7 89.0 71.0 21.3\n0.95 95.7 95.6 94.8 88.3 74.0 30.5\n0.99 87.3 86.7 85.0 78.2 66.2 31.3\niid. (rand) 98.7 98.4 97.0 87.6 70.6 29.6\nrects (rand) 98.2 95.7 83.2 54.7 35.8 17.5\n\nTest Accuracy (%)\nFigure 5: We compare ConvNets trained on one distribution while tested on others. Training on\nrandomly (rand) chosen distributions were also examined. (ap Trained on iid. corruption with\nprobability Pirain, while tested on i.id. corruption with probability pest. 
(bp Train and tested on the\nsame (fixed) missing rectangles distribution, against ones trained on randomly chosen distributions.\nAs discusses in sec. [5] data-imputation is the most common method to handle missing data of un-\nknown missingness distributions. Despite the popularity of this method, high quality data impu-\ntations are very hard to produce, amplified by the fact that classification algorithms are known to\nbe highly sensitive to even a small noise applied to their inputs (?). Even if we assume the data-\nimputation step was done optimally, it would still not give optimal performance under all MAR\nmissingness distributions, and under some settings could produce results which are only half as\ngood as our method (see app. |E| for such a case). In our experiments, we have applied several\ndata-imputations methods to complete the missing data, followed by classifying its outputs using\na standard ConvNet fitted to the fully-observed training set. We first tested naive heuristics, fill-\ning missing values with zeros or the mean pixel value computed over all the images in the dataset.\n\nWe then tested three generative models: GSN (Bengio et al. 2014), NICE (Dinh et al.| 2014) and\nDPM (Sohl-Dickstein et al.|/2015), which are known to work well for inpainting. GSN was omitted\n\nfrom the NORB experiments as we have not manage to properly train it on that dataset. Though the\ndata-imputation methods are competitive when only few of the pixels are missing, they all fall far\nbehind our models above a certain threshold, with more than 50 percentage points separating our\nGHT-model from the best data-imputation method under some of the cases. Additionally, all the\ngenerative models require very long runtimes, which prevents from using them in most real-world\napplications. While we tried to be as comprehensive as possible when choosing which inpainting\nmethods to use, some of the most recent studies on the subject, e.g. the works of|van den Oord et al.|\n(2016) and|Pathak et al. (2016), have either not yet published their code or only partially published\nit. We have also ruled out inpainting algorithms which are made specifically for images, as we did\nnot want to limit the implications of these experiments solely to images.\nWe have also compared ourselves to the published results of the MPDBM model\n\n). Unlike the previous generative models we tested, MPDBM is a generative classifier similar tc\nour method. However, unlike our model, MPDBM does not posses the tractable marginalization not\nthe tractable inference properties, and uses approximations instead. Its lesser performance under.\nlines the importance of these properties for achieving optimality under missing data. An additiona\nfactor might also be their training method, which includes randomly picking a subset of variable:\n\nto act as missing, which might have introduced a bias to the specific missingness distribution usec\nduring their training.\nIn order to demonstrate the ineffectiveness of purely discriminative models, we trained ConvNets\ndirectly on randomly corrupted instances according to pre-selected missingness distributions on the\nMNIST dataset. Unlike the previous experiments, we do allow prior knowledge about the missing-\nness distribution during training time. We found that the best results are achieved when replacing\nmissing values with zeros, and adding as an extra input channel the mask of missing values (known\nas flag data-imputation). 
The results (see fig.|5) unequivocally show the effectiveness of this method\nwhen tested on the same distribution it was trained on, achieving a high accuracy even when only\n10% of the pixels are visible. However, when tested on different distributions, whether on a com-\npletely different kind or even on the same kind but with different parameters, the accuracy drops\nby a large factor, at times by more than 35 percentage points. This illustrate the disadvantage of\nthe discriminative method, as it necessarily incorporates bias towards the corruption process it had\nseen during training, which makes it fail on other distributions. One might wonder whether it is\nVV iid. (rand) i :\n\n17.5\n\n0.99 = 95\n\u2014___ & 90}\n11.0 > 8\n12.9 8 35\n16.7 3 70}\n21.30 < Bila rects (fixed)\n30.5 % s5|]e-e rects (rana)\n31.3 50\n45\n29.6 a\n\nr r r i i i :\n\n7) (2,7) (3,7) (2,11) (2,11) (3,11) (1,15) (2,15) (3,15\n(Number of Rectangles, Width)\n\nTRY RARTTOT ccclth ealootene wantannlacn\npossible for a single network to be robust on more than a single distribution. We found out that\nthe latter is true, and if we train a network on multiple different missingness distributiong!] then the\nnetwork will achieve good performance on all such distributions, though at some cases not reaching\nthe optimal performance. However, though it is possible to train a network to be robust on more\nthan one distribution, the type of missingness distributions are rarely known in advance, and there is\nno known method to train a neural network against all possible distributions, limiting the effectivity\nof this method in practice.\nUnlike all the above methods, our GHT-model, which is trained only once on the clean dataset, matcl\nor sometimes even surpass the performance of ConvNets that are trained and tested on the same dis:\ntribution, showing it is achieving near optimal performance \u2014 as much as possible on any\ndistribution. Additionally, note that similar to ConvNets and according to the theory in app\ndeep GHT-model is decidedly superior to the shallow GCP-model. Experimenting on more comple\u00bb\ndatasets is left for further research. Progress on optimization and regularization of networks basec\non product pooling (even in log-space) is required, and ways to incorporate larger bxb convolu:\ntional operations with overlaps would be useful before we venture into larger and complex datasets\nNevertheless, our preliminary results demonstrate an overwhelming advantage of our TMM model:\ncompared to competing methods, both in terms of robustness to different types of missing data, a:\nwell as in terms of raw performance, with very wide gaps in absolute accuracy than the next bes\nmethod. at times as large as 50 percentage points more than the next best method."}, {"section_index": "10", "section_name": "7 SUMMARY", "section_text": "We have introduced a new family of probabilistic models, which we call Tensorial Mixture Models\nTMMs are based on a simple assumption on the data, which stems from known empirical results or\nnatural images, that gives rise to mixture models with tensorial structure represented by the prior:\ntensor. When the priors tensor is decomposed it gives rise to an arithmetic circuit which in turr\ntransforms the TMM into a Convolutional Arithmetic Circuit (ConvAC). 
A ConvAC correspond:\nto a shallow (single hidden layer) network when the priors tensor is decomposed by a CP (sum o:\nrank-1) approach and corresponds to a deep network when the decomposition follows the Hierarchi:\ncal Tucker (HT) model.\nThe ConvAC representation of a TMM possesses several attractive properties. First, the inference is\ntractable and is implemented by a forward pass through a deep network. Second, the architectural\ndesign of the model follows the deep networks community design, i.e., the structure of TMMs is\ndetermined by just two easily understood factors: size of pooling windows and number of channels.\nFinally, we have demonstrated the effectiveness of our model when tackling the problem of classifi-\ncation with missing data, leveraging TMMs unique ability of tractable marginalization which leads\nto optimal classifiers regardless of the missingness distribution.\nThere are several avenues for future research on TMMs which we are currently looking at, including\nother problems which TMMs could solve (e.g. semi-supervised learning), experimenting with othe:\nConvACs architectures (e.g. through different decompositions), and further progress on optimiza.\ntion and regularization of networks with product pooling."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Tameem Adel, David Balduzzi, and Ali Ghodsi. Learning the Structure of Sum-Product Networks via an\nSVD-based Algorithm. UAI, 2015.\nAnimashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompo-\nsitions for learning latent variable models. Journal of Machine Learning Research (), 15(1):2773-2832,\n2014.\nTal Ben-Nun, Ely Levy, Amnon Barak, and Eri Rubin. Memory Access Patterns: The Missing Piece of the\nMulti-GPU Puzzle. In Proceedings of the International Conference for High Performance Computing, Net-\nworking, Storage and Analysis, pages 19:1-19:12. ACM, 2015.\n\"Specifically, we trained the network by randomizing not only the corruption noise, but the parameters o\nthe corruption process (e.g. for i.i.d. corruption we sampled p for each image from a uniform distribution).\nYoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep Generative Stochastic\nNetworks Trainable by Backprop. In International Conference on Machine Learning, 2014.\nchard Caron and Tim Traynor. The Zero Set of a Polynomial. WSMR Report 05-02, 2005\nXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Inter.\npretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv.org, June\n2016.\nAdam Coates, Andrew Y Ng, and Honglak Lee. An Analysis of Single-Layer Networks in Unsupervisec\nFeature Learning. International Conference on Artificial Intelligence and Statistics, pages 215-223, 2011.\nNadav Cohen and Amnon Shashua. SimNets: A Generalization of Convolutional Networks. In Advances i\nNeural Information Processing Systems NIPS. Deep Learning Workshop. 2014.\nNadav Cohen and Amnon Shashua. Inductive Bias of Deep Convolutional Networks through Pooling Geome\ntry. arXiv.org, May 2016b.\nNadav Cohen, Or Sharir, and Amnon Shashua. On the Expressive Power of Deep Learning: A Tensor Analysis\nIn Conference on Learning Theory COLT, May 2016a.\nLaurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear Independent Components Estimation.\narXiv.org, October 2014.\nLaurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. 
arXiv.org, Ma\u2018\n2016.\nDennis Forster, Abdul-Saboor Sheikh, and J\u00e9rg Liicke. Neural Simpletrons - Minimalistic Probabilistic Net\nworks for Learning With Few Labels. arXiv.org, June 2015.\nRobert Gens and Pedro M Domingos. Discriminative Learning of Sum-Product Networks. Advances in Neura\nInformation Processing Systems, 2012.\nAmir Globerson and Sam Roweis. Nightmare at test time: robust learning by feature deletion. In Jnternational\nConference on Machine Learning. ACM, 2006.\nThomas Hofmann. Probabilistic latent semantic analysis. Morgan Kaufmann Publishers Inc., July 1999\nFurong Huang, Niranjan U N, Ioakeim Perros, Robert Chen, Jimeng Sun, and Anima Anandkumar. Scalabl\nLatent Tree Model and its Application to Health Analytics. In NIPS Machine Learning for Healthcare\nWorkshop, 2015.\nDavid M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. the Journal of machine\nLearning research, 3:993\u20141022, March 2003.\nNadav Cohen and Amnon Shashua. Convolutional Rectifier Networks as Generalized Tensor Decompositions\nIn International Conference on Machine Learnine. Mav 2016a.\nNadav Cohen, Or Sharir, and Amnon Shashua. Deep SimNets. In Computer Vision and Pattern Recognition\nCVPR, May 2016b.\n[an Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-Prediction Deep Boltzmann Ma-\nchines. Advances in Neural Information Processing Svstems. 2013.\nYangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B Girshick, Sergio\nGuadarrama, and Trevor Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. CoRR\nabs/1202.2745, cs.CV, 2014.\nDiederik P Kingma, Danilo J Rezende, Shakir Mohamed, and Max Welling. Semi-Supervised Learning witl\nDeep Generative Models. In Advances in Neural Information Processing Systems, 2014.\nYan LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to documen\nrecognition. Proceedings of the IEEE, 86(11):2278\u20142324, 1998.\nYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436444, May 2015.\nFei-Fei Li and Pietro Perona. A Bayesian Hierarchical Model for Learning Natural Scene Categories. Computer\nVision and Pattern Recognition, 2:524\u2014531, 2005.\nRoderick J A Little and Donald B Rubin. Statistical analysis with missing data (2nd edition). John Wiley 8\nSons, Inc., September 2002.\nLars Maalge, Casper Kaae Sgnderby, Sgren Kaae Sgnderby, and Ole Winther. Auxiliary Deep Generativ\nModels. In International Conference on Machine Learning ICML, May 2016.\nAlireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial Autoen-\ncoders. arXiv.org, November 2015.\nRapha\u00e9l Mourad, Christine Sinoquet, Nevin Lianwen Zhang, Tengfei Liu, and Philippe Leray. A Survey o1\nLatent Tree Models and Applications. J. Artif. Intell. Res. (), cs.LG:157\u2014203. 2013.\nAndrew Y Ng and Michael I Jordan. On Discriminative vs. Generative Classifiers: A comparison of logistic\nregression and naive Bayes. In Advances in Neural Information Processing Systems NIPS, Deep Learning\nWorkshop, 2002.\nDeepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context Encoders\nFeature Learning by Inpainting. In Computer Vision and Pattern Recognition CVPR, May 2016.\nRobert Peharz, Bernhard C Geiger, and Franz Pernkopf. Greedy Part-Wise Learning of Sum-Product Networks\nIn Machine Learning and Knowledge Discovery in Databases, pages 612-627. 
Springer Berlin Heidelberg\nBerlin, Heidelberg, September 2013.\nHoifung Poon and Pedro Domingos. Sum-Product Networks: A New Deep Architecture. In Uncertainty in\nArtificail Intelligence, 2011.\nAmirmohammad Rooshenas and Daniel Lowd. Learning Sum-Product Networks with Direct and Indirec\nVariable Interactions. ICML, 2014.\nDonald B Rubin. Inference and missing data. Biometrika, 63(3):581\u2014592, December 1976\n[aesup Kim and Yoshua Bengio. Deep Directed Generative Models with Energy-Based Probability Estimation\narXiv.org, June 2016.\nDiederik P Kingma, Tim Salimans, and Max Welling. Improving Variational Inference with Inverse Autore-\ngressive Flow. In Advances in Neural Information Processing Systems, June 2016.\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved\nTechniques for Training GANs. In Advances in Neural Information Processing Systems, 2016.\nAmir Shpilka and Amir Yehudayoff. Arithmetic Circuits: A survey of recent results and open questions\nFoundations and Trends\u00ae) in Theoretical Computer Science, 5(3\u20144):207\u2014388, March 2010.\nLe Song, Mariya Ishteva, Ankur P Parikh, Eric P Xing, and Haesun Park. Hierarchical Tensor Decompositio\nof Latent Tree Graphical Models. JCML, pages 334-342, 2013.\nYaniv Taigman, Ming Yang, Marc\u2019 Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the Gap to Human\nLevel Performance in Face Verification. In Computer Vision and Pattern Recognition CVPR. TEEE Com\nputer Society, June 2014.\nLucas Theis and Matthias Bethge. Generative Image Modeling Using Spatial LSTMs. In Advances in Neura\nInformation Processing Systems, 2015.\nDustin Tran, Rajesh Ranganath, and David M Blei. The Variational Gaussian Process. In International Con-\nference on Learning Representations ICLR, 2016.\nXiaogang Wang and Eric Grimson. Spatial Latent Dirichlet Allocation. Advances in Neural Informatior\nProcessing Systems, 2007.\nMatthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In Europea\nConference on Computer Vision. Springer International Publishing, 2014.\nNevin Lianwen Zhang. Hierarchical Latent Class Models for Cluster Analysis. Journal of Machine Learnin.\nResearch (), pages 697-723, 2004.\nDaniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration.\nICCV, pages 479-486, 2011.\nha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning\nx Nonequilibrium Thermodynamics. /nternation Conference on Machine Learning, 2015.\nAaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel Recurrent Neural Networks. In Inter-\n\ntian al Cankoeenee an Machine Donening MN1E\ninput\ncoordinates\n\nM\n9: = Ajai)\n\nconv(i, 2)\n\nhidden layer\n1x1 conv\n\nglobal\npooling\n\n= (a, 5:) Aan dn\n\npool(z) = nm. , conv(i, 2)\n\ndense\n(output)\n\n= (a, pool(:\nFigure 6: The decoding algorithm of the CP decomposition represented by an Arithmetic Circuit.\nWe begin by establishing the minimal background in the field of tensor analysis required for following ou\nwork. A tensor is best thought of as a multi-dimensional array Aq,,...,4y \u20ac IR, where Vi \u20ac [N],di \u20ac [Mi]\nThe number of indexing entries in the array, which are also called modes, is referred to as the order of th\ntensor. 
The number of values an index of a particular mode can take is referred to as the dimension of the mode\nThe tensor A \u20ac R\u00a51\u00ae---\u00ae\u2122\u201cN mentioned above is thus of order N with dimension M; in its i-th mode. Fo\nour purposes we typically assume that Md) My = M, and simply denote it as A \u20ac (R\u201d)\u00ae%.\nThe main concept from tensor analysis we use in our work is that of tensor decompositions. The most straight-\nforward and common tensor decomposition format is the rank-1 decomposition, also known as a CANDE-\nCOMP/PARAFAC decomposition, or in short, a CP decomposition. The CP decomposition is a natural exten-\nsion of low-rank matrix decomposition to general tensors, both built upon the concept of a linear combination\nof rank-1 elements. Similarly to matrices, tensors of the form v) @---@ vi) where v \u20ac R\u2122: are\nnon-zero vectors, are regarded as N-ordered rank-1 tensors, thus the rank- Z CP decomposition of a tensor A\nnaturally defined by:\nwhere {a** \u20ac RM }) feat and a \u20ac R% are the parameters of the decomposition. As mentioned above,\nfor N = 2 it is equivalent to low-order matrix factorization. It is simple to show that any tensor A can be\nrepresented by the CP decomposition for some Z, where the minimal such Z is known as its tensor rank.\n? More precisely, we use a special case of the canonical HT decomposition as presented in|\n[thal (2009 . In the terminology of the latter, the matrices A'77 are diagonal and equal to diag(a\u2019\ne notations from eq.[8).\nThe fundamental operator in tensor analysis is the tensor product. The tensor product operator, denoted by \u00ae,\n\nis a generalization of outer product of vectors (1-ordered vectors) to any pair of tensors. Specifically, let A and\nB be tensors of order P and Q respectively, then the tensor product A \u00ae B results in a tensor of order P + Q,\ndefined by: (A @ B)a,.....dpig = Ady....dp * Bdpsy..dpso*\nAnother decomposition we will use in this paper is of a hierarchical nature and known as the Hierarchical\nTucker decomposition (Hackbusch and Kiihn||2009), which we will refer to as HT decomposition. While the\nCP decomposition combines vectors into higher order tensors in a single step, the HT decomposition does that\nmore gradually, combining vectors into matrices, these matrices into 4th ordered tensors and so on recursivel\nin a hierarchically fashion. Specifically, the following describes the recursive formula of the HT decompositio:\nfor a tensor A \u20ac (R\u2122\u2019)\u00ae% where N\nL.ds\nget =\n\npe-lLajey\n\nTL-2\nL-1,jyy ,L\u20142,2j-la \u00bb~L\u20142,2j,\nda Q e@\not YS TT\norder\nTL-1\nM\n\nN\n= 30 a7 5ia = Aay,dy = ya Te \"id\n\nd=1\nThe above formula is better represented by the network illustrated in fig. 3} beginning with an input layer o!\nVN x VN M-dimensional indicator vectors arranged in a 3D array, followed by a 1 x 1 conv operator,\nglobal product pooling layer, and ends with a dense linear layer outputting Az, ,...,a,,. The conv operator is no\nunlike the standard convolutional layer of ConvNets, with the sole difference being that it may operate withou\ncoefficient sharing, i.e. the filters that generate feature maps by sliding across the previous layer may have\ndifferent coefficients at different spatial locations. This is often referred to in the deep learning community a:\na locally-connected operator (Taigman et al.|/2014). 
Similarly to the CP decomposition, retrieving the entrie:\nof a tensor from its HT decomposition can be computed by the circuit represented in fig.|7| where instead o\na single pair of conv and pooling layers there are log, N such pairs, with pooling windows of size 2. Thougt\nthe canonical HT decomposition dictates size 2 pooling windows, any pooling structure used in practice stil\nresults in a valid HT decomposition.\nArithmetic Circuits constructed from the above conv and product pooling layers are called Convolutional Arith\nmetic Circuits, or ConvACs for short, first suggested by [Cohen et al] s a theoretical framework fo.\nstudying standard convolutional networks, sharing many of the defining traits of the latter, most noteworthy\nthe locality, sharing and pooling properties of ConvNets. Unlike general circuits, the structure of the network\nis determined solely by two parameters, the number of channels of each conv layer and the size of pooling\nwindows, which indirectly controls the depth of the network.\n\u201cThe requirement for N to be a power of two is solely for simplifying the definition of the HT decomposi.\ntion. More generally, instead of defining it through a complete binary tree describing the order of operations\nthe canonical decomposition can use any balanced binary tree.\nwhere iv\n\n4 5 he. LV IS a pOWer OF LW |\n\nTo\n\npliJy 1,9,7,0,27-l,a 0,2j,a\not = So agra @ aI\n\na=1\nri-a\nLI7 LI gl-l.2j-lLe 1\u20141,25,0\nger = ag @ a\u00a2\nSa es Ne\na=1\norder 2'\u20141 order 2!\nTL-2\npL\u20141,5,\u00a5 _ L-1,jyy pL\u2014-2,2j-la p,b\u20142,2j,a\n? = da e@\nL-1,j, L-1jy ,L\u20142,2j-1a pL\u20142,25,\n; ny ak hay g j @ad\u00a2 J\na a eee\na=1 7\norder 4\n\nTL-1\n\nA= x ak gl-bhe g gba\n\norder\nwhere the parameters of the decomposition are the vectors ja\u2019\u2019\u2019\"\u20acIR\"!~1 Sle {0,. L\u20141},3\u20ac[N/2!], ye [rp] and the\ntop level vector a\u2019 \u20ac R-!, and the scalars ro,... >TL\u20141 \u00a9 N are referred to as the ranks of the decompo-\nsition. Similar to the CP decomposition, any tensor can be represented by an HT decomposition. Moreover,\nany given CP decomposition can be converted to an HT decomposition by only a polynomial increase in the\nnumber of parameters.\nThe relationship between tensor decomposition and networks arises from the simple observation that through\ndecomposition one can tradeoff storage complexity with computation where the type of computation consists\nof sums and products. Specifically, tensor decompositions could be seen as a mapping, that takes a tensor of\nexponential size and converts it into a polynomially sized representation, coupled with a decoding algorithm\nof polynomial runtime complexity to retrieve the original entries of tensor \u2014 essentially trading off space com-\nplexity for computational complexity. Examining the decoding algorithms for the CP and HT decompositions,\nind eq. B] respectively, reveal a shared framework for representing these algorithms vi via owe,\ngraphs of products and weighted sums, also known as Arithmetic Circuits (Shpilka and Yehudayoff| BF POTO) or\nSum-Product Networks . More specifically, these circuits take as input NV indicator\nctors .., On, representing the coordinates , dn), where 6; 1j=a, }, and output the value of\nIn the case of the CP decomposition, the. 
matching decoding circuit is defined by eq.|9]below:\ninput hidden layer 0\ncoordinates 1x1 conv .\npooling\n[\u2014mo ee\nM L\n5; = Ujaay ro ro\n\nconv(j.7) = (a7, 6,)\n\nhidden layer L-1\n(L=log,N)\n\n1x1 conv .\npooling\n\nTha Li\n\npoole.) = [] \u2014 convol\u2019,)\n\nBef 9a_1 94)\n\nJ E{1,2}\n\npooly_1(7)= Il conv, \u20141(j\u2019,7)\n\ndense\n(output)\n\n(a, pool, _(:))\nFigure 7: The decoding algorithm of the HT decomposition represented by an Arithmetic Circuit.\nProof. \\f F is the set of Gaussian PDFs over R* with diagonal covariance matrices, which is known to be\nPDF total set, then F\u00ae is the set of Gaussian PDFs over (IR*)Y with diagonal covariance matrices and the\nclaim is trivially true.\nM, N\n\nM N Mg c\nLe T)- Le TT Lwnnced| <5\ni=l j=l i=1 0 j=1k=1\n\n1\nM2\n\nJs0o-, y Abe on Tits} <e\n\n1\nCorollary 2. Let F be a PDF total set of PDFs over R*, then the family of TMMs with mixture components\n\nfrom F can approximate any PDF over (R yy arbitrarily well, given arbitrarily many components.\nC OVERVIEW ON THE EXPRESSIVE CAPACITY OF CONVOLUTIONAL\nARITHMETIC CIRCUITS AND ITS AFFECT ON TENSORIAL MIXTURE\nMODELS\nThe expressiveness of ConvACs has been extensively studied, and specifically the non-generative variants of\nour models, named CP-model and HT-model respectively. In{Gohen et al.](2016a) it was shown that ConvACs\nIn this section we prove the universality property of TMMs, as discussed in s We begin by taking note\nfrom functional analysis and define a new property called PDF total set, which milar in concept to a total\nset, followed by proving that this property is invariant under the cartesian product of functions, which entails\nthe universality of TMMs as a corollary.\nM,\n\n9(x) Lo TL oes\n\naf\n2\nposses the property known as complete depth efficiency. Namely, almost all functiong\"|realized by an HT-mod\u00ab\nof polynomial size, for them to be realized (or approximated) by a CP-model, require it to be of exponential siz\nIn other words, the expressiveness borne out of depth is exponentially stronger than a shallow network, almo:\nalways. It is worth noting that in the followup paper , the authors have shown th:\nthe same result does not hold for standard ConvNets \u2014 while there are specific instances where depth efficienc\nholds, it is not complete, i.e. there is a non-zero probability that a function realized by a polynomially size\ndeep ConvNet can also be realized by a polynomially sized shallow ConvNet. Despite the additional simple\nconstraints put on the parameters, complete depth efficiency does hold for the generative ConvACs of our work\nproof of which can be found in app.[D] which shows the advantage of the deeper GHT-model over the shallo\u2018\nGCP-model. Additionally, this illustrates how the two factors controlling the architecture \u2014 number of channel\nand size of pooling windows \u2014 control the expressive capacity of the GHT-model. While the above show\nwhy the deeper GHT-model is preferred over the shallow GCP-model, there is still the question of whether\npolynomially sized GHT-model is sufficient for describing the complexities of natural data. Though a complet\nand definite answer is unknown as of yet, there are some strong theoretical evidence that it might. One aspe<\nof being sufficient for modeling natural data is the ability of the model to describe the dependency structure\ntypically found in the data. 
In{Cohen and Shashual , the authors studied the separation rank \u2014 a measut\nof correlation, which for a given input partition, measures how far a function is from being separable \u2014 an\nfound that a polynomially sized HT-model is capable of exponential separation rank for interleaved partition:\ni.e. that it can model high correlations in local areas in the input. Additionally, for non-contiguous partition:\nthe separation rank can be at most polynomial, i.e. it can only model a limited correlation between far awa\nareas in the input. These two results combined suggest that the HT-model, and thus also our GHT-model, :\nespecially fit for modeling the type of correlations typically found in natural images and audio, even if it is onl\nof polynomial size. Finally, from an empirical perspective, convolutional hierarchical structures have show\ngreat success on multitude of different domains and tasks. Our models leverage these structures, taking thet\nto a probabilistic setting, which leads us to believe that they will be able to effectively model distributions i\npractice \u2014 a belief we verify by experiments."}, {"section_index": "12", "section_name": "D_ PROOF FOR THE DEPTH EFFICIENCY OF GENERATIVE CONVOLUTIONAL\nARITHMETIC CIRCUITS", "section_text": "In this section we prove that the depth efficiency property of ConvACs proved in (2016a) applies\nalso to the Generative ConvACs we have introduced in sec. More specifically, we prove the following\ntheorem, which is the generative analog of theorem | from (Cohen et al. 2016a):\nTheorem 1. Let AY be a tensor of order N and dimension M_in each mode, generated by the recursive\nformulas in eqs] under the simplex constraints introduced in sec. Definer := min{ro, M}, and consider\nthe space of all possible configurations for the parameters of the decomposition \u2014 falar \u20ac A\u2122it yup:\nIn this space, the generated tensor A\u201d will have CP-rank of at least r\u00ae/? almost everywhere (w.rt. the produc\nmeasure of simplex spaces). Put differently, the configurations for which the CP-rank of A\u201d is less than XP?\nform a set of measure zero. The exact same result holds if we constrain the composition to be \u201cshared\u201d, i.e. s\u00e9\nald? = als? and consider the space of {a7 \u20ac A\u2122-17\"},~ configurations.\nThe only differences between ConvACs and their generative counter-parts are the simplex constraints applied\nto the parameters of the models, which necessitate a careful treatment to the measure theoretical arguments of\nthe original proof. More specifically, while the k-dimensional simplex AS isa subset of the k + 1-dimensional\nspace R**\", it has a zero measure with respect to the Lebesgue measure over R*+!. The standard method\nto define a measure over A* is by the Lebesgue measure over R* of its projection to that space, i.e. let\n\\: R\u00ae > R be the Lebesgue measure over R\", p: Ret R*, p(x) = (@1,... ,ap) be a projection,\nand A c A\\* be a subset of the simplex, then the latter's measure is defined as \\(p(A)). Notice that p(A*)\nhas a positive measure, and moreover that p is invertible over the set p(A*), and that its inverse is given by\np\\(a1,...,@k) = (a1,...,2%,1 \u2014 an x;). In our case, the parameter space is the cartesian product\nof several simplex spaces of different dimensions, for each of them the measure is defined as above, and the\nmeasure over their cartesian product is uniquely defined by the product measure. 
Though standard, the choice\nof the projection function p above could be seen as a limitation, however, the set of zero measure sets in A\nis identical for any reasonable choice of a projection 7 (e.g. all polynomial mappings). More specifically, for\nany projection 7 : R**1 _y R* that is invertible over n(A*), a+ is differentiable, and the Jacobian of 7!\nis bounded over 7(A*), then a subset A C AX is of measure zero w.rt. the projection z iff it is of measure\nzero w.t.t. p (as defined above). This implies that if we sample the weights of the generative decomposition\n(eq. [8] with simplex constraints) by a continuous distribution, a property that holds with probability 1 under the\nstandard parameterization (projection p), will hold with probability 1 under any reasonable parameterization.\n*Almost all functions\u201d in this context means, that for any continuous distribution over the parameters of\nthe HT-model, with probability one the following statement is true for a function realized by an HT-model with\nmpled parameters.\nWe now state and prove a lemma that will be needed for our proof of theorem|1|\nLemma 1. Let M,N, K \u20ac N,1 <r < min{M, N} and a polynomial mapping A : RX > R\u2122*N (i.e.\nfor every i \u20ac [M],j \u20ac [N] then Ai; : R* > Ris a polynomial function). If there exists a point x \u20ac R* s.t.\nrank (A(x)) > r, then the set {x \u20ac IR* |rank (A(x)) < r} has zero measure.\nProof. Remember that rank (A(x)) > r iff there exits a non-zero r x r minor of A(x), which is polynomial\nin the entries of A(x), and so it is polynomial in x as well. Let \u00a2 = (\u201c) : (%) be the number of minors in A,\ndenote the minors by { f(x) }{1, and define the polynomial function f(x) = S7\u00a2_, fi(x)?. It thus holds that\nf(x) = 0 iff for all 7 \u20ac [ce] it holds that f;(x) = 0, ie. f(x) = 0 iff rank (A(x)) < r.\nSollowing the work of|Cohen et al. (2016a), our main proof relies on following notations and facts\nProof of theorem{]| Stemming from the above stated facts, to show that the CP-rank of AY is at least r/?, i\nis sufficient to examine its matricization [A] and prove that rank ([A]) > r*/?.\nNotice from the construction of [A], according to the recursive formula of the HT-decomposition, tha\nits entires are polynomial in the parameters of the decomposition, its dimensions are M*\u2019? each and tha\n1 <r\u2018? < M*\u201d. In accordance with the discussion on the measure of simplex spaces, for each vecto\nparameter a\u2019? \u20ac A\"-1~1, we instead examine its projection a'9'7 = p(alF7) \u20ac R\u2122-1~1, and notice tha\npi(ab47) is a polynomial mapping\" w.r.t._ a7. Thus, [A\u201d] is a polynomial mapping w.r.t. the projectec\nparameters {aI ia, and using lemma] it is sufficient to show that there exists a set of parameters fo\nwhich rank (| 4Y]) > 7/2.\nFett s\u2014 AY and r_ = 1, we will construct by induction over / = 1,...,La\n\n5 are \u00a9 La hiv i anks atrices {[gb47 are at least r2/?\nset of parameters, {a'?7}1,;,7, for which the ranks of the matrices {[6\u00b07]} ;<,w/atj,ye[ry] ave at least r*/?,\n\nwhile enforcing the simplex constraints on the parameters. More so, we\u2019ll construct these parameters s.t.\nalJ:7 = a7, thus proving both the \u201cunshared\u201d and \u201dshared\u201d cases.\n\nDenoting for convenience \u00a2\nFor the case / = 1 we have:\nro\nws > 1,57, 025-1, 0,23,\nIY a.\u201d Ya! 
J \u201cQa J,O\n1\noF .; = Yr t= JANIS\n/ 0 Otherwise\nwhich means rank ([\u00a2 aa) = r, while preserving the simplex constraints, which proves our inductive hy-\npothesis for / = 1.\n\u00b0As we mentioned earlier, p is invertible only over p(A*), for which its inverse is given bj\np (a1, 65, @k) = (@1,..., 0,1 \u2014 at a;). However, to simplified the proof and notations, we use p\ndefined here over the entire range R*-!, even where it does not serve as the inverse of p.\nNow, f(x) is a polynomial in the entries of x, and so it either vanishes on a set of zero measure, or it is the\n\nzero polynomial (see Caron and Traynor} for proof). Since we assumed that there exists x \u20ac R* st.\n\nrank(A(x)) > r, the latter option is not pos oO\nWe denote by [{.A] the matricization of an N-order tensor A (for simplicity, N is assumed to be\neven), where rows and columns correspond to odd and even modes, respectively. Specifically, if\nAe RM1*\"\u2122N | the matrix [A] has My - M3 -...+ My-\u20141 rows and M2 \u00ab My - - My columns,\nrearranging the entries of the tensor such that Aa. ..dy is stored in row index 1 + oy NP 3 (d2i\u2014 int\n\n1) ham Mo;-1 and column index 1 + 37; NP (dai \u2014 1) 12 41 Maj. Additionally, the matriciza-\ntion is a linear operator, i.e. for all scalars a1, a2 and tensors Ai, A2 with the order and dimensions\n\nin every mode, it holds that [a;A; + a2A2] = a1[Ai] + a2[Ap].\n\nThe relation between the Kronecker product (denoted by \u00a9) and the tensor product (denoted by \u00ae)\nis given by [A \u00ae B] = [A] \u00a9 [B].\n\nFor any two matrices A and B, it holds that rank (A \u00a9 B) = rank (A) - rank (B).\n\nLet Z be the CP-rank of A, then it holds that rank ([A]) < Z (see (Cohen et al.|\n\na) for proof).\nAssume now that rank [\"}) >r? / forall j\u2019 \u20ac [N/2!-!] and 7\u2019 \u20ac [ri\u20141]. For some specific choice\nof 7 \u20ac [N/2!] andy \u20ac [r;] we have:\nCorollary 3. Assume the mixing components M = {fi(x) \u20ac L?(R?)NL1(R*)}M4, are square integrabl\nprobability density functions, which form a linearly independent set. Consider a deep GHT-model of polynomia\nize whose parameters are drawn at random by some continuous distribution. Then, with probability 1, the\ndistribution realized by this network requires an exponential size in order to be realized (or approximated w.r.t\nthe L? distance) by the shallow GCP-model. The claim holds regardless of whether the parameters of the deer\nGHT-model are shared or not.\nProof. Given a coefficient tensor A, the CP-rank of A is a lower bound on the number of channels (denoted by\nZ in the body of the article) required to represent that tensor by the ConvAC following the CP decompositior\n\nas introduced in sec.\n\nAdditionally, since the mixing components are linearly independent, their products\n\n{TT fi(x:)| fi \u20ac M)} are linearly independent as well, which entails that any distribution representable\n\nby the TMM with mixing components M has a unique coefficient tensor A. From theorem\n\n[I the set of\n\nparameters of a polynomial GHT-model with a coefficient tensor of a polynomial CP-rank, the requirement for\n\na polynomial GCP-model realizing that distribution exactly, forms a set of measure zero.\nIt is left to prove, that not only is it impossible to exactly represent a distribution with an exponential coeffi-\ncient tensor by a GCP-model, it is also impossible to approximate it. This follows directly from lemma 7 in\n\nappendix B of|Cohen et al. 
as our meets the requirement of that lemma."}, {"section_index": "13", "section_name": "+ PROOF FOR THE OPTIMALITY OF MARGINALIZED BAYES PREDICTOR", "section_text": "In this section we give short proofs for the claims from se: on the optimality of the marginalized Baye:\npredictor under missing-at-random (MAR) distribution, when the missingness mechanism is unknown, as wel\nas the general case when we do not add additional assumptions. In addition, we will also present a counter ex\nample proving data imputation results lead to suboptimal classification performance. We begin by introducin;\nseveral notations that augment the notations already introduced in the body of the article.\nGiven a specific mask realization m \u20ac {0,1}*, we use the following notations to denote partial assignment:\nto the random vector 4\u2019. For the observed indices of 1\u2019, i.e. the indices for which m; = 1, we denote a partia\nassignment by \u00a5 \\ m = xo, where x, \u20ac IR? is a vector of length d, equal to the number of observed indices\nSimilarly, we denote by 1 Mm = xm a partial assignment to the missing indices according to m, where\nXm \u20ac R\u2122 is a vector of length d,,, equal to the number of missing indices. As an example of the notation\nfor given realizations x \u20ac R* and m \u20ac {0, 1}*, we defined in sec.[5]the event o(x, m), which using curren\nnotation is marked by the partial assignment V \\m = x\u00bb where xo matches the observed values of the vecto:\nx according to m.\nWith the above notations in place, we move on to prove claim|I] which describes the general solution to the\noptimal prediction rule given both the data and missingness distributions, and without adding any additional\nsumptions.\n\u00b0It is important to note that most commonly used distribution functions are square integrable, e.g. mos\nmembers of the exponential family s an distribution.\nrl.\n\nor =) aig! 1,2j7\u2014 Lag gl 1,2j,0\na=1\n\nrr\n=> (oF = Se aie git Le] g [gi hse]\n\na=1\nDenote My := ['~b27-b2] \u00a9 [g!-124-\u00a2] fora = 1,...,r1-1. By our inductive assumption, and by the\ngeneral property rank (A \u00a9 B) = rank(A) - rank (B), we have that the ranks of all matrices Mq are at least\n\npe = 2, Writing [647] = SOE} ali. Mg, and noticing that {M,} do not depend on\nali, we simply pick alyi7 = 1g=1, and thus 7 = Mj, which is of rank 2), This completes the proof\nof the theorem. oO\nProof of corollary[1| Using the same notation as in the previous proof, and denoting by x, the partial vector\ncontaining the observed values of x \u00a9 m, the following holds:\nP(M=m|X\\m=x,, XAm=xm) = P(M=m|4\\m=x,)\nWe have shown that P(M=mlo(x,m), \u00a5 = y) does not depend on y, and thus does not affect the optimal\nprediction rule in claim[]] It may therefore be dropped, and we obtain the marginalized Bayes predictor. Cc\nHaving proved that in the MAR setting, classification through marginalization leads to optimal performance,\nwe now move on to show that the same is not true for classification through data-imputation. Though there are\nmany methods to perform data-imputation, i.e. to complete missing values given the observed ones, all of these\nmethods can be seen as the solution of the following optimization problem, or more typically its approximation\ng(x \u00a9m) = argmax\nx/ERS AVi:m; =1+2\nProof of clai\nV1 loss\n\nFix an arbitrary prediction rule h. 
We will show that L(h*) < L(h), where L is the expected\nUri 1055.\n\n1 \u2014 L(R)=Exm,y)~(x.M.9) Lhcom)=y]\n-y \u00a5 _, Pi Mam, \u00a5=x, Y=y)Ln(aconm) =y\n\nme {0,1}*ye[k]\u201d\n\n-_> =/ P(M=m, X\\m=xo, \u00a5AM=Xm; V=Y) Lnceqmn)=yIXodXm\nRdo JRdm\n\nme\u20ac{0,1}*y\u20ac[k]\n\n1\u00bb Y[. da(xom)=\n\ner 1} ye[k]\n\n2 > Y[. da(xom)=\n\ners 1} ye[k]\n\n> P(X\\m=x,\ners 1}s Rdo\n\n:> P(X\\m=x,\ners 1}5 fo\n\n=1\u2014 L(h*)\n\nvio | P(M=m, X\\m=xo, \u00a5NM=xm, YV=y)dxm\n\nyP(M=m, \u00a5\\m=x., V=y)dxo\n\n) 2 licom)= yP(Y=y|\u00a5\\m=x.)P(M=m|V\\m=xo, V=y)dxo\nye lk]\n\n) Fla om)ayP(V=9|\u00a5\\m=xo)P(M=m|\u00a5\\m=xo, Y=y)dxo\nyelk]\nWhere (1) is because the output of h(x \u00a9 m) is independent of the missing values, (2) by marginalization,\n(3) by conditional probability definition and (4) because by definition h*(x \u00a9 m) maximizes the expression\nP(Y=y|V\\m=x,)P(M=m|X\\m=x., Y=y) w.rt. the possible values of y for fixed vectors m and xo.\nFinally, by replacing integrals with sums, the proof holds exactly the same when instances (1) are discrete.\nWe now continue and prove corollary[T] a direct implication of claim}1]which shows that in the MAR setting,\nthe missingness distribution can be ignored, and the optimal prediction rule is given by the marginalized Bayes\npredictor.\n= P(M=m, XN m=x,,|V\\m=x., Y=y)dxm\n\nRtm\n\n= P(XNM=Xm|V\\m=xo, Y=y) -P(M=m|e#nm=x\u00bb, \u00a5\\m=xo, Y=y)dXm\n\nRtm\n\n=1 P(XNM=Xm|V\\m=xo, V=y) -P(M=m|XNm=xXm, V\\m=Xo)dxm\n\nRdm\n\n=, f P(X NM=xXn|\u00a5\\m=xo,V=y) -P(M=m|%\\m=x.)dxn\n\nRdm\n=P(M=m|1\u2019\\m=x,) P(ANM=Xm|V\\m=Xo, V=y)dxXm\nRtm\n\u2014P(M=\u2014miol(x m))\n= P(M=m, XN m=x,,|V\\m=x., Y=y)dxm\n\nRtm\n\n= P(XNM=Xm|V\\m=xo, Y=y) -P(M=m|e#nm=x\u00bb, \u00a5\\m=xo, Y=y)dXm\n\nRtm\n\n=1 P(XNM=Xm|V\\m=xo, V=y) -P(M=m|XNm=xXm, V\\m=Xo)dxm\n\nRdm\n\n=. f P(XAm=xpn|\u00a5\\\\m=x,, V=y) -P(M=m|\u00a5\u2019\\m=x,)dxn\nRdm\n\n=P(M=m|*\\m=x,) P(ANM=Xm|V\\m=Xo, V=y)dxXm\nd.\n\nRdm\n\n=P(M=m|o(x, m))\nWhere (1) is due to the independence assumption of the events Y = y and M = m conditioned on V = x,\nwhile noting that (V \\m = xz) A(& Mm = xm) is a complete assignment of V. (2) is due to the MAR\numption, i.e. that for a given m and x, it holds for all x, \u20ac Rim.\nX, Xy Y Weight Probability (ce = 10~*)\n0 0 0 l-e 16.665%\n0 1 0 16.667%\n1 0 0 l-e 16.665%\n1 1 0 16.667%\n0 0 1 0 0.000%\n0 1 1 16.668%\n1 0 1 0.000%\n1 1 1 16.668%\nClaim 3. There exists a data distribution D and MAR missingness distribution Q s.t. the accuracy of classi-\nfication through data-imputation is almost half the accuracy of the optimal marginalized Bayes predictor, witl\nan absolute gap of more than 33 percentage points.\nProof. For simplicity, we will give an example for a discrete distribution over the binary se\nX x Y= {0,1}\u00b0 x {0,1}. Let 1 > \u20ac > 0 be some small positive number, and we define D according to table|2\nwhere each triplet (v1, 22, y) \u20ac Vx) is assigned a positive weight, which through normalization defines <\ndistribution over V x Y. The missingness ibution Q is defined s.t. Po(My = 1, Mz = 0|X = x) = 1 fo\nall x \u20ac \u00a5,i.e. X1 is always observed and X2 is always missing, which is a trivial MAR distribution. Given the\nabove data distribution D, we can easily calculate the exact accuracy of the optimal data-imputation classifie!\nand the marginalized Bayes predictor under the missingness distribution Q, as well as the standard Bayes pre-\ndictor under full-observability. 
First notice that whether we apply conditional or unconditional data-imputation\nand whether X, is equal to 0 or 1, the completion will always be X2 = 1 and the predicted class will alway:\nbe Y = 1. Since the data-imputation classifiers always predict the same class Y = 1 regardless of their input\nthe probability of success is simply the probability P(Y = 1) = its (for \u00a2 = 10~* it equals approximatels\n3.337%). Similarly, the marginalized Bayes predictor always predicts Y = 0 regardless of its input, and s\u00a2\nits probability of success is P(Y = 0) = ars (for \u20ac = 10~* it equals approximately 66.663%), which i:\nalmost double the accuracy achieved by the data-imputation classifier. Additionally, notice that the marginal.\nized Bayes predictor achieves almost the same accuracy as the Bayes predictor under full-observability, whict\nequals exactly 2."}, {"section_index": "14", "section_name": "F DETAILED DESCRIPTION OF THE EXPERIMENTS", "section_text": "Experiments are meaningful only if they could be reproduced by other proficient individuals. Providing suf-\nficient details to enable others to replicate our results is the goal of this section. We hope to accomplish this\nby making our code public, as well as documenting our experiments to a sufficient degree allowing for their\nreproduction from scratch. Our complete implementation of the models presented in this paper, as well as our\nmodifications to other open-source projects and scripts used in the process of conducting our experiments, are\navailable at our Github repository: Inttps: //github.com/HUJI-Deep/TMM| We additionally wish to\ninvite readers to contact the authors, if they deem the following details insufficient in their process to reproduce\nour results."}, {"section_index": "15", "section_name": "F.1 DESCRIPTION OF METHODS", "section_text": "In the following we give concise descriptions of each classification method we have used in our experiments.\nThe results of the experiment on MP-DBM (Goodfellow et al.|/2013) were taken directly from the paper and\nTable 2: Data distribution over the space \u00a5 x Y = {0,1}\u00b0 x {0,1} that serves as the example for\nthe sub-optimality of classification through data-imputation (proof of claim[3).\nWhere g(x \u00a9m) is the most likely completion of x\u00a9m. When data-imputation is carried out for classification\npurposes, one is often interested in data-imputation conditioned on a given c! Y=y,ie.:\nGiven a classifier h : R* \u2014 [4] and an instance x with missing values according to m, classification through\ndata-imputation is simply the result of applying h on the output of g. When h is the optimal classifier for\ncomplete data, i.e. the Bayes predictor, we end up with one of the following prediction rules:\nonditional: h(x \u00a9 m) = argmax P()Y = y|V = g(x \u00a9 m;\nYy\nwere not conducted by us, hence we do not cover it in this section. We direct the reader to that article for exact\ndetails on how to reproduce their results."}, {"section_index": "16", "section_name": "F.1.1 ROBUST LINEAR CLASSIFIER", "section_text": "In[Dekel and Shamir} , binary linear classifiers were trained by formulating their optimization as a quadri:\nprogram under the constraint that some of its features could be deleted, i.e. their original value w: angec\nto zero. While the original source code was never published, the authors have kindly agreed to share with u:\ntheir code, which we used to reproduced their results, but on larger datasets. 
The algorithm has only a coupl\nhyper-parameters, which were chosen by a grid-search through a cross-validation process. For details on th\u00e9\nexact protocol for testing binary classi issing data, please see sec."}, {"section_index": "17", "section_name": "F.1.2. K-NEAREST NEIGHBORS", "section_text": "K-Nearest Neighbors (KNN) is a classical machine learning algorithm used for both regression and classifica-\ntion tasks. Its underlying mechanism is finding the & nearest examples (called neighbors) from the training set.\n(x1, y1),---, (Xk, ye) \u20ac S, according to some metric function d(-,-) : \u00a5 x XY \u2014 Ry}, after which a summa-\nrizing function f is applied to the targets of the k nearest neighbors to produce the output y* = f(y1,..-, yr).\nWhen KNN is used for classification, f is typically the majority voting function, returning the class found in\nmost of the k nearest neighbors."}, {"section_index": "18", "section_name": "F.1.3 CONVOLUTIONAL NEURAL NETWORKS", "section_text": "The most widespread and successful discriminative method nowadays are Convolutional Neural Net-\nworks (ConvNets). Standard ConvNets are represented by a computational graph consisted of different kinds\nof nodes, called layers, with a convolutional-like operators applied to their inputs, followed by a non-linear\npoint-wise activation function, e.g. max(0,) known as ReLU.\nFor our experiments on MNIST, both with and without mi we have used the LeNeT ConvNet ar-\nchitecture that is bundled with Caffe trained for 20,000 iterations using\nSGD with 0.9 momentum 0.01 base learning rate, whi tant for 10,000 iterations, followec\nby a linear decrease to 0.001 for another 5,000 iterations, followed by a linear decrease to 0 learning rate fot\nthe remaining 5,000 iterations. The model also used [2-regularization (also known as weight decay), whict\nwas chosen through cross-validation for each experiment separately. No other modifications were made to the\nmodel or its training procedure.\nFor our experiments on NORB, we have used an ensemble of 3 ConvNets, each using the following architecture\n5x5 convolution with 128 output channels, 3x3 max pooling with stride 2, ReLU activation, 5x5 convolutio:\nwith 128 output channels, ReLU activation, dropout layer with probability 0.5, 3x3 average pooling wit\nstride 2, 5x5 convolution with 256 output channels, ReLU activation, dropout layer with probability 0.5\n3x3 average pooling with stride 2, fully-connected layer with 768 output channels, ReLU activation, dropou\nlayer with probability 0.5, and ends with fully-connected layer with 5 output channels. The stereo image\nwere represented as a two-channel input image when fed to the network. During training we have used dat\naugmentation consisting of randomly scaling and rotation transforms. The networks were trained for 40,001\niterations using SGD with 0.99 momentum and 0.001 base learning rate, which remained constant for 30,001\niterations, followed by a linear decrease to 0.0001 for 6000 iterations, followed by a linear decrease to 0 learnin;\nrate for the remaining 4,000 iterations. The model also used 0.0001 weight decay for additional regularization\nWhen ConvNets were trained on images containing missing values, we passed the network the original imag\nwith missing values zeroed out, and an additional binary image as a separate channel, containing 1 for missin;\nvalues at the same spatial position, and 0 otherwise \u2014 this missing data format is sometimes known as fla:\ndata imputation. 
Other formats for representing missing values were tested (e.g. just using zeros for missins\nvalues), however, the above scheme performed significantly better than other formats. In our experiments, we\nassumed that the training set was complete and missing values were only present in the test set. In order t\ndesign ConvNets that are robust against specific missingness distributions, we have simulated missing value:\nduring training, sampling a different mask of missing values for each image in each mini-batch. As coverec\nins {6} the results of training ConvNets directly on simulated missingness distributions resulted in classifier:\nIn our experiments we use KNN for classification with missing data, where the training set consists of complete\nexamples with no missing data, but at classification time the inputs have missing values. Given an input\nwith missing values x \u00a9 m and an example x\u2019 from the training set, we use a modified Euclidean distance\nmetric, where we compare the distance only against the non-missing coordinates of x, i.e. the metric is defined\nby d(x\u2019,x\u00ae@m) = Do, my=1 (xi \u2014 wi)\u2019. Through a process of cross-validation we have chosen k = 5 for all\nof our experiments. Our implementation of KNN is based on the popular scikit-learn python library\nIn addition to training ConvNets directly on missing data, we have also used them as the classifier for testing\ndifferent data imputation methods, as describe in the next section."}, {"section_index": "19", "section_name": "F.1.4 CLASSIFICATION THROUGH DATA IMPUTATION", "section_text": "We have tested the following generative models:\nFor a complete theoretical description of our model please see the body of the article. Our models wer\u00ab\nimplemented by performing all intermediate computations in log-space, using numerically a aware \u00a9 operations. Ir\npracticed, that meant our models were realized by the SimNets architecture Coher\nfetal 2OT6E). which consists of Similarity layers representing gaussian distributions, MEX Tayers representing\nweighted sums performed on log-space input and outputs, as well as standard pooling operations. The learnec\nparameters of the MEX layers are called offsets, which represents the weights of the weighted sum, but saved ir\nlog-space. The parameters of the MEX layers can be optionally shared between spatial regions, or alternatively\nleft with no parameter sharing at all. Additionally, when used to implement our generative models, the offset:\nare normalized to have a soft-max (i.e., log (, exp(ai))) of zero.\nWe first describe the architectures used for the MNIST dataset. For the GCP-model, we used MZ = 800, and\nfollowing the similarity layer is a 1 x 1 MEX layer with no parameter sharing over spatial regions and 10\noutput channels. The model ends with a global sum pooling operation, followed by another 1 x 1 MEX layer\nwhich were biased towards the specific distribution used in training, and performed worse on other distributions\ncompared to ConvNets trained on the same distribution.\nThe most common method for handling missing data, while leveraging available dis\nthrough the application of data imputation \u2014 an algorithm for the completion of missing values \u2014 and then\n\npassing the results to a classifier trained on uncorrupted dataset. 
We have tested five different data imputation algorithms:

• Zero data imputation: replacing every missing value by zero.

• Mean data imputation: replacing every missing value by the mean value computed over the dataset.

• Generative data imputation: training a generative model and using it to complete the missing values by finding the most likely instance that coincides with the observed values, i.e. solving the following:

g(x ⊙ m) = argmax_{x' ∈ ℝ^S : ∀i, m_i = 1 ⇒ x'_i = x_i} P(X = x')

We have tested the following generative models:

– Generative Stochastic Networks (GSN) (Bengio et al., 2014): We have used their original source code and trained their example model on MNIST for 1000 epochs. Whereas in the original article they have tested completing only the left or right side of a given image, we have modified their code to support general masks. Our modified implementation can be found at https://github.com/HUJI-Deep/GSN.

– Non-linear Independent Components Estimation (NICE) (Dinh et al., 2014): We have used their original source code from https://github.com/laurent-dinh/nice and trained it on MNIST using their example code without changes. Similarly to our modification to the GSN code, here too we have adapted their code to support general masks over the input. Additionally, their original inpainting code required 110,000 iterations, which we have reduced to just 8,000 iterations, since the effect on classification accuracy was marginal. For the NORB dataset, we have used their CIFAR10 example, with a lower learning rate of 10⁻⁴. Our modified code can be found at https://github.com/HUJI-Deep/nice.

– Diffusion Probabilistic Models (DPM) (Sohl-Dickstein et al., 2015): We have used their original source code from https://github.com/Sohl-Dickstein, using their example code without changes. Similarly to our modifications to GSN, we have added support for a general mask of missing values, but other than that kept the rest of the parameters for inpainting unchanged. For NORB we have used the same model as MNIST. We have tried using their CIFAR10 example for NORB; however, it produced exceptions during training. Our modified code can be found at https://github.com/HUJI-Deep/Diffusion-Probabilistic-Models.

For a complete theoretical description of our model please see the body of the article. Our models were implemented by performing all intermediate computations in log-space, using numerically aware operations. In practice, that meant our models were realized by the SimNets architecture (Cohen et al., 2016), which consists of similarity layers representing gaussian distributions, MEX layers representing weighted sums performed on log-space inputs and outputs, as well as standard pooling operations. The learned parameters of the MEX layers are called offsets, which represent the weights of the weighted sum, but saved in log-space. The parameters of the MEX layers can be optionally shared between spatial regions, or alternatively left with no parameter sharing at all. Additionally, when used to implement our generative models, the offsets are normalized to have a soft-max (i.e., log(Σ_i exp(a_i))) of zero.

The network architectures we have tested in this article consist of M different Gaussian mixture components with diagonal covariance matrices, over non-overlapping patches of the input of size 2 × 2, which were implemented by a similarity layer as specified by the SimNets architecture, but with an added gaussian normalization term.

We first describe the architectures used for the MNIST dataset. For the GCP-model, we used M = 800, and following the similarity layer is a 1 × 1 MEX layer with no parameter sharing over spatial regions and 10 output channels. The model ends with a global sum pooling operation, followed by another 1 × 1 MEX layer with 10 outputs, one for each class. The GHT-model starts with the similarity layer with M = 32, followed by a sequence of four pairs of a 1 × 1 MEX layer followed by a 2 × 2 sum pooling layer, and after the pairs an additional 1 × 1 MEX layer lowering the outputs of the model to 10, as the number of classes. The numbers of output channels of the MEX layers are as follows: 64-128-256-512-10. All the MEX layers in this network do not use parameter sharing, except the first MEX layer, which uses a repeated sharing pattern of 2 × 2 offsets, analogous to a 2 × 2 convolution layer with stride 2. Both models were trained with the losses described in the body of the article, using the Adam SGD variant for optimizing the parameters, with a base learning rate of 0.03 and β₁ = β₂ = 0.9. The models were trained for 25,000 iterations, where the learning rate was dropped by 0.1 after 20,000 iterations.
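For intuition, the simplest form of such a log-space weighted sum (a 1 × 1 MEX with offsets a_i, reducing over channels) can be sketched as follows; this is our own minimal reading of the operator, not the SimNets implementation:

```python
import numpy as np

def logsumexp(v):
    # numerically stable log(sum(exp(v)))
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def normalize_offsets(a):
    # shift so the soft-max log(sum_i exp(a_i)) of the offsets is zero
    return a - logsumexp(a)

def mex(x, offsets):
    # log-space weighted sum: log sum_i exp(x_i + a_i)
    return logsumexp(x + offsets)

a = normalize_offsets(np.array([0.5, -1.0, 2.0]))
y = mex(np.array([1.0, 2.0, 3.0]), a)
```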
For the NORB dataset, we have trained only the GHT-model, with M = 128 for the similarity layer. The MEX layers use the same parameter sharing scheme as the one for MNIST, and the numbers of output channels of the MEX layers are as follows: 256-256-256-512-5. Training was identical to the MNIST models, with the exception of using 40,000 iterations instead of just 25,000. Additionally, we have used an ensemble of 4 models trained separately, each trained using a different generative loss weight (see below for more information). We have also used the same data augmentation methods (scaling and rotation) which were used in training the ConvNets for NORB in this article.

The standard L2 weight regularization (sometimes known as weight decay) did not work well on our models, which led us to adapt it to better fit log-space weights, by minimizing λ Σ_i (exp(a_i))² instead of λ ‖a‖₂² = λ Σ_i a_i², where the parameter λ was chosen through cross-validation. Additionally, since even with large values of λ our model was still overfitting, we have added another form of regularization in the form of random marginalization layers. A random marginalization layer is similar in concept to dropout, but instead of zeroing activations completely at random, it chooses spatial locations at random, and then zeroes out the activations at those locations for all the channels. Under our model, zeroing all the activations in a layer at a specific location is equivalent to marginalizing over all the inputs in the receptive field of that respective location. We have used random marginalization layers in between all our layers during training, where the probability of zeroing out activations was chosen through cross-validation for each layer separately. Though it might raise the concern that random marginalization layers could lead to results biased toward the missingness distributions we have tested on, in practice the addition of those layers only helped improve our results in cases where only few pixels were missing.

Finally, we wish to discuss a few optimization tricks which had a minor effect compared to the above, but were nevertheless very useful in achieving slightly better results. First, instead of optimizing directly the objective defined by eq. 4, we add a smoothing parameter β between the two terms, as follows:

Θ* = argmin_Θ -(1/|S|) Σ_{(x,y)∈S} [ log( e^{N_Θ(x;y)} / Σ_{y'} e^{N_Θ(x;y')} ) + β · log Σ_{y'} e^{N_Θ(x;y')} ]

Setting β too low diminishes the generative capabilities of our models, while setting it too high diminishes the discriminative performance. Through cross-validation, we decided on the value β = 0.01 for the models trained on MNIST, while for NORB we have used a different value of β for each of the models, ranging in {0.01, 0.1, 0.5, 1}. Second, we found that performance increased if we normalized activations before applying the 1 × 1 MEX operations. Specifically, we calculate the soft-max over the channels for each spatial location, which we call the activation norm, and then subtract it from every respective activation. After applying the MEX operation, we add back the activation norm. Though it might not be obvious at first, subtracting a constant from the input of a MEX operation and adding it to its output does not change the mathematical operation. However, it does resolve the numerical issue of adding very large activations to very small offsets, which might result in a loss of precision.
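The normalization trick can be sketched in a few lines; again this is an illustrative reading, with the MEX reduced to a plain log-sum-exp over channels:

```python
import numpy as np

def logsumexp(v):
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def mex_with_activation_norm(x, offsets):
    norm = logsumexp(x)                    # the "activation norm" over channels
    out = logsumexp((x - norm) + offsets)  # MEX applied to normalized activations
    return out + norm                      # adding the constant back is mathematically a no-op
```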
Finally, we apply our model at different translations of the input and then average the class predictions. Since our model can marginalize over inputs, we do not need to crop the original image, and instead mask the unknown parts after translation as missing. Applying a similar trick to standard ConvNets on MNIST does not seem to improve their results. We believe this method is especially fit to our model because it does not have a natural treatment of overlapping patches like ConvNets do, and because it is able to marginalize over missing pixels easily, not limiting it just to crop translations as is typically done."}, {"section_index": "20", "section_name": "F.2 DESCRIPTION OF EXPERIMENTS", "section_text": "In this section we give a detailed description of the protocols we have used during our experiments.

This experiment focuses on the binary classification problem derived from MNIST, by limiting the number of classes to two different digits at a time. We use the same non-zero feature deletion distribution as suggested by Globerson and Roweis (2006), i.e. for a given image we uniformly sample a set of N non-zero pixels from the image (if the image has fewer than N non-zero pixels, then all of its non-zero pixels are chosen), and replace their values with zeros. This type of missingness distribution falls under the MNAR type defined in sec. 5.

We test values of N in {0, 25, 50, 75, 100, 125, 150}. For a given value of N, we train a separate classifier for each digit pair on a randomly picked subset of the dataset containing 300 images per digit (600 in total). During training we use a fixed validation set with 1000 images per digit. After picking the best classifier according to the validation set, the classifier is tested against a test set with 1000 images per digit, with missing values randomly chosen according to the value of N. This experiment is repeated 10 times for each digit pair, each time using a different subset for the training set and a new corrupted test set. After conducting all the different experiments, the accuracies are averaged for each value of N and reported in table 1."}, {"section_index": "21", "section_name": "F.2.2 MULTI-CLASS DIGIT CLASSIFICATION WITH MAR MISSING DATA", "section_text": "This experiment focuses on the complete multi-class digit classification of the MNIST dataset, in the presence of missing data according to different missingness distributions. Under this setting, only the test set contains missing values, whereas the training set does not. We test two kinds of missingness distributions, which both fall under the MAR type defined in sec. 5. In the first kind, which we call i.i.d. corruption, each pixel is missing with a fixed probability p. In the second kind, which we call missing rectangles corruption, the positions of N rectangles of width W are chosen uniformly in the picture, where the rectangles can overlap one another. During the training stage, the models to be tested are not to be biased toward the specific missingness distributions we have chosen, and during the test stage, the same classifier is tested against all types of missingness distributions, without supplying it with the parameters or type of the missingness distribution it is tested against. This rule prevents the use of ConvNets trained on simulated missingness distributions. To demonstrate that the latter leads to biased classifiers, we have conducted a separate experiment just for ConvNets, where the previous rule is ignored, and we train a separate ConvNet classifier on each type and parameter of the missingness distributions we have used. We then tested each of those ConvNets on all other missingness distributions, the results of which are shown in fig. 5, which confirmed our hypothesis.
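For reference, the three missingness distributions used in our comparisons can be sampled as follows (a NumPy sketch under our own naming; masks use 1 to mark a missing pixel):

```python
import numpy as np

def iid_mask(shape, p, rng):
    """Each pixel is missing independently with probability p."""
    return (rng.random(shape) < p).astype(np.float32)

def rects_mask(shape, n, w, rng):
    """n axis-aligned w-by-w rectangles at uniformly chosen positions (may overlap)."""
    mask = np.zeros(shape, dtype=np.float32)
    height, width = shape
    for _ in range(n):
        r = rng.integers(0, height - w + 1)
        c = rng.integers(0, width - w + 1)
        mask[r:r + w, c:c + w] = 1.0
    return mask

def feature_deletion_mask(image, n, rng):
    """Delete n uniformly chosen non-zero pixels; if fewer exist, delete them all."""
    mask = np.zeros(image.shape, dtype=np.float32)
    nz = np.argwhere(image > 0)
    take = min(n, len(nz))
    if take > 0:
        chosen = nz[rng.choice(len(nz), size=take, replace=False)]
        mask[chosen[:, 0], chosen[:, 1]] = 1.0
    return mask

rng = np.random.default_rng(0)
m1 = iid_mask((28, 28), p=0.5, rng=rng)
m2 = rects_mask((28, 28), n=2, w=7, rng=rng)
```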
"}, {"section_index": "22", "section_name": "G IMAGE GENERATION AND NETWORK VISUALIZATION", "section_text": "Following the graphical model perspective of our models allows us to not only generate random instances from the distribution, but to also generate the most likely patches for each neuron in the network, effectively explaining its role in the classification process. We remind the reader that every neuron in the network corresponds to a possible assignment of a latent variable in the graphical model. By looking for the most likely assignments of each of its child nodes in the graphical tree model, we can generate a patch that describes that neuron. Unlike similar methods suggested for visualizing neural networks, often relying on brute-force search or on solving some optimization problem to find the most likely image, our method emerges naturally from the probabilistic interpretation of our model.

In fig. 8 we can see conditional samples generated for each digit, while in fig. 9 we can see a visualization of the top-level layers of the network, where each small patch matches a different neuron in the network. The common wisdom of how ConvNets work is by assuming that simple low-level features are composed together to create more and more complex features, where each subsequent layer denotes features of higher abstraction. The visualization of our network clearly demonstrates this hypothesis to be true for our case, showing small strokes iteratively being composed into complete digits.

Figure 8: Generated digit samples from the GHT-model trained on the MNIST dataset.

Figure 9: Visualization of the GHT-model. Each of the images above visualizes a different layer of the model and consists of several samples generated from latent variables at different spatial locations, conditioned on randomly selected channels. The leftmost image shows samples taken from the 5th layer, which consists of just a single latent variable with 512 channels. The center image shows samples taken from the 4th layer, which consists of a 2 × 2 grid of latent variables with 256 channels each. The image is divided into 4 quadrants, each containing samples taken from the respective latent variable at that position. The rightmost image shows samples from the 3rd layer, which consists of a 4 × 4 grid of latent variables with 128 channels, and the image is similarly spatially divided into different areas matching the latent variables of the layer."}, {"section_index": "23", "section_name": "H RAW RESULTS OF EXPERIMENTS", "section_text": "For both presentational and page layout reasons we have chosen to present most of our results in the form of charts in the body of the article.
Considering that exact results are important for both reproducibility as well as future comparisons to our work, we provide below the raw results of our experiments in the form of detailed tables. For completeness, some of the tables we did include in the body of the article are duplicated here as well.

N =          0     25    50    75    100   125   150
LP-Based     97.9  97.5  96.4  94.1  89.2  80.9  70.2
GHT-model    98.5  98.2  97.8  96.5  93.9  87.1  76.3

Table 3: Blind classification with missing data on the binary MNIST dataset with feature deletion noise according to Globerson and Roweis (2006), averaged over all pairs of digits.

p =          0     0.25  0.50  0.75  0.90  0.95  0.99
KNN          96.8  96.7  96.2  94.4  86.4  71.7  29.2
Zero +       99.2  97.3  88.2  58.6  28.7  19.5  12.6
Mean +       99.2  98.4  90.9  52.4  21.1  15.6  10.9
GSN +        99.2  97.4  88.5  51.8  17.7  12.6  10.1
NICE +       99.2  98.9  97.9  82.6  36.3  20.2  11.7
DPM +        99.2  99.0  98.2  89.4  47.7  25.7  12.7
MP-DBM*      99.0  98.0  97.0  92.0  35.0  18.0  13.0
GCP-model    96.6  96.4  95.7  92.2  79.8  66.5  31.2
GHT-model    99.0  99.0  98.7  97.7  90.5  76.0  33.0

Table 4: Blind classification with missing data on the multi-class MNIST dataset, generated according to i.i.d. corruption with probability p for each pixel. (*) Accuracies are estimated from the plot presented in Goodfellow et al. (2013). (+) Data imputation algorithms followed by a standard ConvNet.

(N,W) =      (1,7) (2,7) (3,7) (1,11) (2,11) (3,11) (1,15) (2,15) (3,15)
KNN          96.6  94.0  87.1  95.9   90.3   76.7   95.0   86.1   65.0
Zero +       93.0  74.9  47.6  86.2   56.2   31.2   78.6   44.2   22.6
Mean +       97.9  89.9  67.8  95.8   74.1   42.0   91.8   60.0   27.4
GSN +        97.4  86.8  56.8  94.2   64.3   31.8   88.9   46.4   21.8
NICE +       98.5  93.2  74.9  97.7   81.3   52.3   95.7   69.1   38.0
DPM +        97.2  87.0  64.0  94.4   73.2   44.6   91.4   61.8   33.2
GCP-model    96.0  93.1  85.0  95.1   88.7   23.3   94.5   83.7   62.4
GHT-model    98.6  97.3  91.2  98.3   93.7   79.1   98.0   89.6   67.2

Table 5: Blind classification with missing data on the multi-class MNIST dataset, generated according to missing rectangles corruption with N missing rectangles, each of width and height equal to W. (+) Data imputation algorithms followed by a standard ConvNet.

p =          0     0.25  0.50  0.75  0.90  0.95  0.99
KNN          81.3  81.0  80.8  80.4  78.0  74.4  55.6
Zero +       96.8  19.3  19.7  20.0  20.0  20.0  19.7
Mean +       96.8  66.8  49.7  35.5  30.2  24.2  20.1
NICE +       96.8  95.8  91.5  70.7  30.9  22.9  20.5
DPM +        96.8  88.8  60.2  28.2  21.3  20.9  20.6
GHT-model    96.7  96.6  94.9  84.0  67.9  58.1  41.2

Table 6: Blind classification with missing data on the multi-class NORB dataset, generated according to i.i.d. corruption with probability p for each pixel. (+) Data imputation algorithms followed by a standard ConvNet.

(N,W) =      (1,7) (2,7) (3,7) (1,11) (2,11) (3,11) (1,15) (2,15) (3,15)
KNN          81.2  81.0  81.0  81.1   80.4   79.8   80.5   78.4   75.3
Zero +       35.9  28.1  25.1  25.7   22.6   20.9   22.4   20.5   19.8
Mean +       81.9  73.0  66.6  63.2   49.6   41.9   45.7   32.5   25.9
NICE +       96.1  95.3  93.7  92.1   81.4   67.4   73.8   46.4   33.0
DPM +        90.1  81.9  74.2  65.9   46.0   34.3   37.7   24.2   20.9
GHT-model    96.5  96.3  95.9  95.5   93.7   91.2   92.3   86.0   79.4

Table 7: Blind classification with missing data on the multi-class NORB dataset, generated according to missing rectangles corruption with N missing rectangles, each of width and height equal to W. (+) Data imputation algorithms followed by a standard ConvNet.

p_test =       0.25  0.50  0.75  0.90  0.95  0.99
p_train
0.25           98.9  97.8  78.9  32.4  17.6  11.0
0.50           99.1  98.6  94.6  68.1  37.9  12.9
0.75           98.9  98.7  97.2  83.9  56.4  16.7
0.90           97.6  97.5  96.7  89.0  71.0  21.3
0.95           95.7  95.6  94.8  88.3  74.0  30.5
0.99           87.3  86.7  85.0  78.2  66.2  31.3
i.i.d. (rand)  98.7  98.4  97.0  87.6  70.6  29.6
rects (rand)   98.2  95.7  83.2  54.7  35.8  17.5

Table 8: We compare ConvNets on the MNIST dataset, trained on i.i.d. corruption with probability p_train while tested on i.i.d. corruption with probability p_test. Additionally, we trained ConvNets on either i.i.d. or missing rectangles corruption distributions with random corruption parameters sampled for each batch of training samples, while testing on i.i.d. corruption with the fixed parameter p_test.
Table 9: We compare ConvNets on the MNIST dataset, trained and tested on the same (fixed) missing rectangles distribution, against ConvNets trained on randomly chosen missingness distributions from either the missing rectangles or i.i.d. corruption distributions.

(N,W) =       (1,8) (2,8) (3,8) (1,12) (2,12) (3,12) (1,16) (2,16) (3,16)
rects (fixed) 98.7  97.7  93.1  98.6   94.7   82.0   98.2   90.5   70.5
rects (rand)  99.0  97.6  92.3  98.4   94.6   80.1   98.0   90.0   66.9
i.i.d. (rand) 97.8  94.8  83.4  96.8   88.6   64.5   96.1   80.6   49.5"}]
H1kjdOYlx
[{"section_index": "0", "section_name": "MODULAR MULTITASK REINFORCEMENT\u2019\nLEARNING WITH POLICY SKETCHES", "section_text": "Jacob Andreas, Dan Klein, and Sergey Levine\n(jda,klein, svlevine}@eecs.berkeley.edu\nWe describe a framework for multitask deep reinforcement learning guided by\npolicy sketches. Sketches annotate each task with a sequence of named subtasks,\nproviding high-level structural relationships among tasks, but not providing the\ndetailed guidance required by previous work on learning policy abstractions for\nRL (e.g. intermediate rewards, subtask completion signals, or intrinsic motiva-\ntions). Our approach associates every subtask with its own modular subpolicy,\nand jointly optimizes over full task-specific policies by tying parameters across\nshared subpolicies. This optimization is accomplished via a simple decoupled\nactor\u2014critic training objective that facilitates learning common behaviors from\ndissimilar reward functions. We evaluate the effectiveness of our approach on a\nmaze navigation game and a 2-D Minecraft-inspired crafting game. Both games\nfeature extremely sparse rewards that can be obtained only after completing a\nnumber of high-level subgoals (e.g. escaping from a sequence of locked rooms ot\ncollecting and combining various ingredients in the proper order). Experiments\nillustrate two main advantages of our approach. First, we outperform standard\nbaselines that learn task-specific or shared monolithic policies. Second, out\nmethod naturally induces a library of primitive behaviors that can be recombined\nto rapidly acquire policies for new tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "This paper describes a framework for learning\ncomposable deep subpolicies in a multitask set-\nting, guided only by abstract policy sketches.\nWe are interested in problems like the ones\nshown in with collections of tasks\nthat involve sparse rewards and long-term plan-\nning, but which share structure in the form of\ncommon subgoals or reusable high-level ac-\ntions. Our work aims to develop models that\ncan learn efficiently from these sparse rewards\nand rapidly adapt to new tasks, by exploiting\nthis shared structure and translating success on\none task into progress on others. Our approach\nultimately induces a library of high-level ac-\ntions directly from symbolic annotations like\nthe ones marked JX; and [x9 in the figure.\nThis approach builds on a significant body of\nresearch in reinforcement learning that focuses\non hierarchical representations of behavior. In\nthese approaches, a high-level controller learns\na policy over high-level actions\u2014known var-\n\niously as options (Sutton et al.| {1999}, skills"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "71: make planks Th 7: make sticks Tl2\n\nbi: get wood i ha bi: get wood i Ba\n\nba: use workbench m2 53: use toolshed 3\nFigure 1: Composing policies from subpolicies. Here\nwe have simplified versions of two tasks (make planks\nand make sticks, each associated with its own policy\n(Il, and Iz respectively). These policies share an ini-\ntial high-level action 61: both require the agent to get\nwood before taking it to an appropriate crafting station.\nBy enforcing that the agent initially follows the same\nsubpolicy 71 in both tasks, we can learn a reusable rep-\nresentation of their shared structure.\n(Konidaris & Barto}|2007), or primitives (Hauser et al. 
)\u2014-which are themselves implemente\n\nas policies over low-level actions in the environment. While one line of research (e.g. |Danie\n(2012)) investigates learning hierarchical policies without any supervision, such hierarchie\nare empirically difficult to learn directly from unconstrained interaction . The bulk 0\nexisting work instead relies on additional information (in the form of interme: wards, subtas|\ncompletion signals, or intrinsic motivations) that guide the learner toward useful high-level actions\nWhile effective, these approaches depend on state representations simple or structured enough tha\nsuitable reward signals can be effectively engineered by hand.\nHere we focus on multitask learning of hierarchical policies from a weaker form of supervision: a\ntraining time, each task (7, and 72 in|F s annotated with a sketch (/\u00a2; and K\u20182) consisting of <\nsequence of high-level action symbols (6), 62 and b3)\u2014with no information about how these actions\nshould be implemented. Our approach associates each such high-level action with its own low.\nlevel subpolicy, and jointly optimizes over concatenated task-specific policies by tying parameter:\nacross shared subpolicies. Our thesis is that even the minimal information about high-level policy\nstructure contained in a sketch provides enough of a learning signal to induce general, reusable\nsubpolicies. Crucially, sketches are totally ungrounded in the representation of the world\u2014they\nrequire no intervention in a simulator or environment model.\nThe present work may be viewed as an extension of recent approaches for learning compositional\n\ndeep architectures from structured program descriptors (Andreas et al. 2016} Reed & de Freitas}\n\n2015). Here we focus on learning in interactive environments with reinforcement training signals.\nThis extension presents a variety of technical challenges. Concretely, our contributions are:\nWe evaluate our approach on two families of tasks: a maze navigation game (Figure 3h), in whict\nthe agent must navigate through a sequence of locked doors to reach a target room; and a 2-D\nMinecraft-inspired crafting game (Figure 3p), in which the agent must acquire particular resource:\nby finding raw ingredients, combining them together in the proper order, and in some cases building\nintermediate tools that enable the agent to alter the environment itself. In both games, the agen\nreceives a reward only after the final goal is accomplished. For the most challenging tasks, involving\nsequences of four or five high-level actions, a task-specific agent initially following a random policy\nessentially never discovers the reward signal.\nWe evaluate a modular agent architecture trained with guidance from policy sketches under severa\ndifferent data conditions: (1) when learning the full collection of tasks jointly via reinforcement, (2\u00b0\nin a zero-shot setting where a policy sketch is available for the held-out task, and (3) in a adaptatior\nsetting, where sketches are hidden and the agent must learn a policy over high-level actions. In al\ncases, our approach substantially outperforms standard policy optimization baselines."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "The agent representation we describe in this paper belongs to the broader family of hierarchical\nreinforcement learners described in the literature. As detailed in our subpolicies may be\nviewed as a relaxation of the options framework first described by (1999). 
A large body of work describes techniques for learning options and related abstract actions, in both single- and multitask settings. For learning the implementation of options, most techniques rely on intermediate supervisory signals, e.g. to encourage exploration or completion of pre-defined subtasks (Kulkarni et al., 2016). An alternative family of approaches employs post hoc analysis of already-learned policies to extract reusable sub-components (Stolle & Precup, 2002; Konidaris et al., 2011). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhnevets et al. (2016), and other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhuber (2004) and Menache et al. (2002).

Once a library of high-level actions exists, agents are faced with the problem of learning high-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup,
However, because these approaches attempt to learn a single representation \u00ab\nthe Q functio\n\nn for all subtasks and contexts, they require extremely strong formal assumptions abot\n\nthe form of the reward function and state representation (Andre & Russell} |2002) that the preset\n\nwork avoids by decoupling the policy representation from the value function.\nOur approach also bears some resemblance to the instruction following literature in natural language\nprocessing. Existing work on instruction following falls into two broad categories: approaches that\nrequire a highly structured (typically logical) action and world representations\n\nrequire detailed supervision of action sequences or dense reward signals essentially equivalent to\nthe framework we describe here involves no formal or logical language for describing plans, and\nno supervised action sequences. Additionally, the modular model described in this paper natrually\nsupports adaptation to tasks where no sketches are available, while all existing instruction following\nmodels learn a joint policy over instructions and actions, and are unable to function in the absence\nof instructions.\nWe consider a multitask reinforcement learning problem arising from a family of infinite-horizon\ndiscounted Markov decision processes in a shared environment. This environment is specified by\na tuple (S, A, P,y), with S a set of states, A a set of low-level actions, P: Sx Ax S > R\na transition probability distribution, and > a discount factor. Each task r \u20ac T is then specified\nby a pair (R,,p,), with R, : S + Ra task-specific reward function and p, : S \u2014 R an initial\ndistribution over states. For a fixed sequence {(s;, a;) } of states and actions obtained from a rollout\nof a given policy, we will denote the empirical return starting in state s; as qj := ye 77 R(s;). In\naddition to the components of a standard multitask RL problem, we assume that tasks are annotated\nwith sketches K,, each consisting of a sequence (b;1,b;2,...) of high-level symbolic labels drawn\nfrom a fixed vocabulary B. Our model associates each of these symbols with a randomly initialized\nmodular subpolicy. By sharing each subpolicy across all tasks annotated with the corresponding\nsymbol, our approach naturally learns the shared abstraction for the corresponding subtask, without\nrequiring any information about the erounding of that task to be explicitly specified by annotation."}, {"section_index": "4", "section_name": "3.1 MODEL", "section_text": "We exploit the structural information provided by sketches by constructing for each symbol b a\ncorresponding subpolicy 7. At each timestep, a subpolicy may select either a low-level action\na \u20ac Aora special STOP action. We denote the augmented state space At := AU {sToP}. While\nthis framework is agnostic to the implementation of subpolicies, we are especially interested in the\ncase where subpolicies are specified by deep networks. As shown in [Figure 2} the experiments\nin this paper represent each 7, as a neural network whose input is a representation of the current\nstate, and whose output is a distribution over A+. While all action spaces in our experiments are\ndiscrete, it is straightforward to instead allow this last layer to parameterize a mixed distribution\nover an underlying continuous action space and the STOP action. These subpolicies may be viewed\nas options of the kind described by [Sutton et al.] 
"}, {"section_index": "4", "section_name": "3.1 MODEL", "section_text": "We exploit the structural information provided by sketches by constructing for each symbol b a corresponding subpolicy π_b. At each timestep, a subpolicy may select either a low-level action a ∈ A or a special STOP action. We denote the augmented action space A⁺ := A ∪ {STOP}. While this framework is agnostic to the implementation of subpolicies, we are especially interested in the case where subpolicies are specified by deep networks. As shown in Figure 2, the experiments in this paper represent each π_b as a neural network whose input is a representation of the current state, and whose output is a distribution over A⁺. While all action spaces in our experiments are discrete, it is straightforward to instead allow this last layer to parameterize a mixed distribution over an underlying continuous action space and the STOP action. These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).

Figure 2: Model overview. Each subpolicy π_b is uniquely associated with a symbol b and implemented as a neural network that maps from a state s_i to a distribution over A⁺, and chooses an action a_i by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.

Given a sketch, a task-specific policy Π_τ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index i (initially 0), and executes actions from π_{b_i} until the STOP symbol is emitted, at which point control is passed to π_{b_{i+1}}. We may thus think of Π_τ as inducing a Markov chain over the state space S × B, with transitions given by:

(s, b_i) → (s', b_i)    with probability Σ_{a∈A} π_{b_i}(a|s) · P(s'|s, a)
(s, b_i) → (s, b_{i+1})    with probability π_{b_i}(STOP|s)

Note that Π_τ is semi-Markov with respect to the projection of the augmented state space S × B onto the underlying state space S. We denote the complete family of task-specific policies Π := ∪_τ {Π_τ}, and let each π_b be an arbitrary function of the current environment state parameterized by some weight vector θ_b. The learning problem is to optimize over all θ_b to maximize the sum of expected discounted rewards J(Π) := Σ_τ J(Π_τ) := Σ_τ E_{s_i ∼ Π_τ} [Σ_i γ^i R_τ(s_i)] across all tasks τ ∈ T.

Here that optimization is accomplished via a simple decoupled actor-critic method. In a standard policy gradient approach, with a single policy π with parameters θ, we compute gradient steps of the form (Williams, 1992):

∇_θ J(π) = Σ_i (∇_θ log π(a_i|s_i)) (q_i - c(s_i)),    (1)

where the baseline or "critic" c can be chosen independently of the future without introducing bias into the gradient. Recalling our previous definition of q_i as the empirical return starting from s_i, this form of the gradient corresponds to a generalized advantage estimator (Schulman et al., 2015) with λ = 1. Here c achieves close to the optimal variance (Greensmith et al., 2004) when it is set exactly equal to the state-value function V_π(s_i) = E_π q_i for the target policy π starting in state s_i.

The situation becomes slightly more complicated when generalizing to modular policies built by sequencing subpolicies. In this case, we will have one subpolicy per symbol but one critic per task. This is because subpolicies π_b might participate in a number of composed policies Π_τ, each associated with its own reward function R_τ. Thus individual subpolicies are not uniquely identified with value functions, and the aforementioned subpolicy-specific state-value estimator is no longer well-defined. We extend the actor-critic method to incorporate the decoupling of policies from value functions by allowing the critic to vary per-sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Noting that ∇_{θ_b} J(Π) = Σ_{τ: b∈K_τ} ∇_{θ_b} J(Π_τ), i.e. the expected reward across all tasks in which π_b participates, we have:

∇_{θ_b} J(Π) = Σ_τ Σ_i (∇_{θ_b} log π_b(a_{τi}|s_{τi})) (q_{τi} - c_τ(s_{τi})),    (2)
where each state-action pair (s_{τi}, a_{τi}) was selected by the subpolicy π_b in the context of the task τ. Now minimization of the gradient variance requires that each c_τ actually depend on the task identity. (This follows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over τ in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these c_τ to be implemented with an arbitrary function approximator parameterized by a vector η_τ. This is trained to minimize a squared error criterion, with gradients given by

∇_{η_τ} [ Σ_i (q_i - c_τ(s_i))² ] ∝ Σ_i (∇_{η_τ} c_τ(s_i)) (q_i - c_τ(s_i)).    (3)

Alternative forms of the advantage estimator (e.g. the TD residual R_τ(s_i) + γ V_τ(s_{i+1}) - V_τ(s_i), or any other member of the GAE family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.3) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.

Algorithm 1 DO-STEP(Π, curriculum)
1: D ← ∅
2: while |D| < D do
3:   τ ~ curriculum(·)                                    ▷ sample task τ from curriculum (Section 3.3)
4:   d = {(s_i, a_i, b_i = K_{τ,i}, q_i, τ), ...} ~ Π_τ   ▷ do rollout
5:   D ← D ∪ d
6: for b ∈ B, τ ∈ T do
7:   d = {(s_i, a_i, b', q_i, τ') ∈ D : b' = b, τ' = τ}
8:   θ_b ← θ_b + (α/|d|) Σ_i (∇ log π_b(a_i|s_i)) (q_i - c_τ(s_i))   ▷ update policy
9:   η_τ ← η_τ + (β/|d|) Σ_i (∇ c_τ(s_i)) (q_i - c_τ(s_i))           ▷ update critic

The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over these steps, which is driven by a curriculum learning procedure, is described in the following section and specified in Algorithm 2.) This is an on-policy algorithm. In each step, the agent samples tasks from a task distribution provided by a curriculum (described in the following subsection). The current family of policies Π is used to perform rollouts in each sampled task, accumulating the resulting tuples of (states, low-level actions, high-level symbols, rewards, and task identities) into a dataset D. Once D reaches a maximum size D, it is used to compute gradients w.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The step sizes α and β in Algorithm 1 can be chosen adaptively using any first-order method.
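To make the decoupled update concrete, here is a small runnable NumPy sketch. The tabular softmax subpolicy, the per-task critic tables, and the fabricated batch are stand-ins of our own; only the grouping of samples by (symbol, task) and the two update rules mirror lines 6-9 of Algorithm 1:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class TabularSubpolicy:
    """Tabular-softmax stand-in for the paper's MLP subpolicies."""
    def __init__(self, n_states, n_actions):
        self.theta = np.zeros((n_states, n_actions))
    def probs(self, s):
        return softmax(self.theta[s])
    def grad_log(self, s, a):
        g = np.zeros_like(self.theta)
        g[s] = -self.probs(s)     # gradient of log softmax ...
        g[s, a] += 1.0            # ... w.r.t. the logits of state s
        return g

def do_step(subpolicies, critics, batch, alpha=0.1, beta=0.1):
    """One decoupled actor-critic update.
    batch: list of (s, a, b, q, tau) tuples from on-policy rollouts;
    critics[tau] is a per-task table of state values c_tau(s)."""
    for b in {x[2] for x in batch}:
        for tau in {x[4] for x in batch}:
            d = [x for x in batch if x[2] == b and x[4] == tau]
            if not d:
                continue
            pi, c = subpolicies[b], critics[tau]
            # policy step: sum_i grad log pi_b(a_i|s_i) * (q_i - c_tau(s_i))
            g = sum(pi.grad_log(s, a) * (q - c[s]) for s, a, _, q, _ in d)
            pi.theta += (alpha / len(d)) * g
            # critic step: for a tabular critic, grad c_tau(s_i) is an indicator
            for s, _, _, q, _ in d:
                c[s] += (beta / len(d)) * (q - c[s])

# toy usage: two symbols, one task, a fabricated batch of transitions
subpolicies = {b: TabularSubpolicy(n_states=4, n_actions=3) for b in ("b1", "b2")}
critics = {"tau1": np.zeros(4)}
batch = [(0, 1, "b1", 1.0, "tau1"), (2, 0, "b2", 0.5, "tau1")]
do_step(subpolicies, critics, batch)
```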
"}, {"section_index": "5", "section_name": "3.3 CURRICULUM LEARNING", "section_text": "For complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learner's time to focus on "easy" tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desired structural properties.

To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfitting. Initially the model is presented with tasks associated with short sketches. Once average reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable reward 0 < q_i < 1. Let Êr_τ denote the empirical estimate of the expected reward for the current policy on task τ. Then at each timestep, tasks are sampled in proportion to 1 - Êr_τ, which by assumption must be positive. Experiments show that both components of this curriculum learning scheme improve the rate at which the model converges to a good policy (Section 4.3).

The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the maximum sketch length ℓ_max is set to one, and the curriculum is initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper features any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of ℓ_max, the algorithm uses the current collection of task policies Π to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to DO-STEP can also be used to compute reward estimates Êr_τ; these estimates determine a new task distribution for the curriculum. The inner loop is repeated until the minimum reward r_min exceeds the threshold r_good, at which point ℓ_max is incremented and the process repeated over a (now-expanded) collection of tasks.

Algorithm 2 TRAIN-POLICIES()
1: Π ← init()                           ▷ initialize subpolicies randomly
2: ℓ_max ← 1
3: loop
4:   r_min ← -∞
5:   T' = {τ ∈ T : |K_τ| ≤ ℓ_max}
6:   curriculum(·) = Unif(T')           ▷ initialize ℓ_max-step curriculum uniformly
7:   while r_min < r_good do
8:     DO-STEP(Π, curriculum)           ▷ update parameters (Algorithm 1)
9:     Z = Σ_{τ∈T'} [1 - Êr_τ]
10:    curriculum(τ) = 1[τ ∈ T'] (1 - Êr_τ)/Z   ∀τ ∈ T
11:    r_min ← min_{τ∈T'} Êr_τ
12:  ℓ_max ← ℓ_max + 1

As described in the introduction, we evaluate the performance of our approach in two environments: a maze navigation game and a crafting game. Both games involve nontrivial low-level control: agents must learn to avoid obstacles and interact with various kinds of objects. But the environments also feature hierarchical structure: rewards are accessible only after the agent has completed two to five high-level actions in the appropriate sequence.

In all our experiments, we implement each subpolicy as a multilayer perceptron with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state. Each subpolicy network receives as input a set of features describing the current state of the environment, and outputs a distribution over actions. The agent acts at every timestep by sampling from this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented using RMSProp with a step size of 0.001 and gradient clipping to a unit norm. We take the batch size parameter D in Algorithm 1 to be 2000, and set γ = 0.9 in both environments. For curriculum learning, the improvement threshold r_good is set to 0.8.
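A minimal sketch of the curriculum's task-sampling rule; the task names and reward estimates below are hypothetical:

```python
import numpy as np

def curriculum(tasks, reward_estimates, l_max, rng):
    """Sample a task with sketch length <= l_max, in proportion to (1 - Er_tau)."""
    eligible = [t for t in tasks if len(tasks[t]) <= l_max]
    w = np.array([1.0 - reward_estimates[t] for t in eligible])
    p = w / w.sum()
    return eligible[rng.choice(len(eligible), p=p)]

tasks = {"make planks": ("get wood", "use workbench"),
         "make sticks": ("get wood", "use toolshed")}
est = {"make planks": 0.9, "make sticks": 0.2}   # hypothetical reward estimates
rng = np.random.default_rng(0)
tau = curriculum(tasks, est, l_max=2, rng=rng)
```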
The maze environment (Figure 3a) corresponds closely to the "light world" described by Konidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms, some of which are connected by doors. Some doors require that the agent first pick up a key to open them. For our experiments, each task corresponds to a goal room (always at the same position relative to the agent's starting position) that the agent must reach by navigating through a sequence of intermediate rooms. The agent has one sensor on each side of its body, which reports the distance to keys, closed doors, and open doors in the corresponding direction. Sketches specify a particular sequence of directions for the agent to traverse between rooms to reach the goal. Mazes are sampled with random sizes and random decisions about whether to connect rooms with open doors, locked doors, or no doors. The sketch always corresponds to a viable traversal from the start to the goal position, but other (possibly shorter) traversals may also exist.

Figure 3: Example tasks from the environments used in this paper. (a) In the maze environment, the agent must reach a goal position by traversing right (1), down (2) and down again (3) through a sequence of rooms, some of which may have locked doors. (b) In the crafting environment, an agent seeking to pick up the gold nugget in the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), and use the bridge to cross the water (4).

The crafting environment (Figure 3b) is inspired by the popular game Minecraft, but is implemented in a 2-D grid world. The agent may interact with some objects in the world by facing them and executing a special INTERACT action. Interacting with raw materials initially scattered around the environment causes them to be added to an inventory. Interacting with different crafting stations causes objects in the agent's inventory to be combined or transformed into other objects. Each task in this game corresponds to some crafted object the agent must produce; the most complicated goals require the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxe and a bridge) to reach ingredients located in initially inaccessible regions of the environment.

A complete listing of tasks and sketches is given in Appendix A.
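The interaction mechanic can be sketched with a toy inventory-and-recipe encoding of our own devising (the recipes follow the plank and stick rows of Appendix A; the real environment is richer):

```python
RECIPES = {
    "toolshed": {"plank": {"wood": 1}},     # station -> product -> ingredients
    "workbench": {"stick": {"wood": 1}},
}

def interact(inventory, target):
    """INTERACT with `target`: raw materials are picked up; at a crafting
    station, any recipe whose ingredients are present fires."""
    if target in RECIPES:
        for product, needs in RECIPES[target].items():
            if all(inventory.get(k, 0) >= v for k, v in needs.items()):
                for k, v in needs.items():
                    inventory[k] -= v
                inventory[product] = inventory.get(product, 0) + 1
    else:
        inventory[target] = inventory.get(target, 0) + 1
    return inventory

inv = interact({}, "wood")        # pick up a raw material
inv = interact(inv, "toolshed")   # craft: {'wood': 0, 'plank': 1}
```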
"}, {"section_index": "6", "section_name": "4.2 MULTITASK LEARNING", "section_text": "The primary experimental question in this paper is whether the extra structure provided by policy sketches alone is enough to enable fast learning of coupled policies across tasks. To evaluate this, we compare our modular approach to two policy gradient baselines (one that learns an independent policy for each task and one that learns a joint policy across all tasks) as well as a critic-only Q reader baseline. For the independent model, task-specific policies are represented by networks with the same structure as the modular subpolicies. The joint model conditions both on these environment features and on a feature vector encoding the complete sketch. The Q reader forms the same joint state and action space described in Section 3.1, and learns a single feedforward network to map from both environment states and representations of action symbols onto Q values. This baseline can be viewed either as a chain-structured hierarchical abstract machine with a learned state abstractor (Andre & Russell, 2002), or as a standard instruction following baseline from the natural language processing literature (Vogel & Jurafsky, 2010).

Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approach described in this paper, while Independent learns a separate policy for each task, Joint learns a shared policy that conditions on the task identity, and Q reader learns a single network to map from states and action symbols to Q values. Performance for the best iteration of the (off-policy) Q reader is plotted. (a) Performance of the three models in the maze environment. (b) Performance in the crafting environment. (c) Individual task performance for the modular model in the crafting domain. Colors correspond to task length. It can be seen that the sharp steps in the learning curve correspond to increases of ℓ_max in the curriculum. The modular approach is eventually able to achieve high reward on all tasks, while the baseline models perform considerably worse on average.

Figure 5: Ablation experiments. (a) The critic: lines labeled "task" include a baseline that varies with the task identity, while lines labeled "state" include a baseline that varies with the state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) The curriculum: lines labeled "length" use a curriculum with iteratively increasing lengths, while lines labeled "weight" sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and current reward improves convergence.

Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in both the maze domain and the crafting domain, our approach substantially outperforms the baselines: it induces policies with substantially higher average reward and converges more quickly than the policy gradient baselines.
It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A).

Having demonstrated the overall effectiveness of our approach, our remaining experiments explore (1) the importance of various components of the training procedure, and (2) the learned models' ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration to the crafting domain, which features a larger and more diverse range of tasks and high-level actions.

In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our training procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance.

To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policy gradient approach. Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently improves this performance; combining them gives the best result.

We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. We again see that both components are essential for good performance. Sampling uniformly across all tasks of the target length results in slow convergence.

In our final experiments, we consider the model's ability to generalize to new tasks unseen at training time. We consider two evaluation conditions: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch by interacting with the new task.

We hold out two length-four tasks from the full inventory used in Section 4.2, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary reinforcement learning over the high-level action space B rather than A, implementing the high-level learner with the same agent architecture as described in Section 3.1. Note that the independent baseline cannot be applied to the zero-shot evaluation, while the joint baseline cannot be applied to the adaptation evaluation (because it depends on pre-specified sketch features). The held-out tasks are sufficiently challenging that the baselines achieve negligible reward, while the modular model does comparatively well.

Table 1: Model performance under various evaluation conditions. MT is the multitask training condition described in Section 4.2, while 0-S and Ad. are respectively the zero-shot and adaptation experiments described in Section 4.4."}, {"section_index": "7", "section_name": "5 CONCLUSIONS", "section_text": "We have described an approach for multitask learning of neural network policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards.
This process induces an inventory of reusable and interpretable subpolicies which can be employed for zero-shot generalization when further sketches are available, and hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision. We have released our code at http://github.com/jacobandreas/psketch."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "JA is supported by a Facebook Graduate Fellowship and a Huawei / Berkeley AI fellowship."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "David Andre and Stuart Russell. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, 2001.

Jacob Andreas and Dan Klein. Alignment-based compositional semantics for instruction following. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, 2016.

Pierre-Luc Bacon and Doina Precup. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the International Conference on Machine Learning, pp. 41-48. ACM, 2009.

S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 82-90. Association for Computational Linguistics, 2009.

David L. Chen and Raymond J. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, volume 2, pp. 1-2, 2011.

Christian Daniel, Gerhard Neumann, and Jan Peters. Hierarchical relative entropy policy search. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 273-281, 2012.

Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer.
arXiv preprint arXiv:1609.07088, 2016.

Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530, 2004.

Kris Hauser, Timothy Bretl, Kensuke Harada, and Jean-Claude Latombe. Using motion primitives in probabilistic sample-based planning for humanoid robots. In Algorithmic Foundation of Robotics, pp. 507-522. Springer, 2008.

Bernhard Hengst. Discovering hierarchy in reinforcement learning with HEXQ. In ICML, 2002.

George Konidaris and Andrew G. Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pp. 895-900, 2007.

Ron Parr and Stuart Russell. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems, 1998.

Doina Precup. Temporal abstraction in reinforcement learning. PhD thesis, 2000.

Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International Symposium on Abstraction, Reformulation, and Approximation, pp. 212-223. Springer, 2002.

Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.

Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence, 2011.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992."}, {"section_index": "10", "section_name": "A TASKS AND SKETCHES", "section_text": "The complete list of tasks, sketches, and symbols is given below. Tasks marked with an asterisk (*) are held out for the generalization experiments described in Section 4.4, but included in the multitask training experiments in Sections 4.2 and 4.3.

Goal           Sketch

Maze environment
goal1          left, left
goal2          left, down
goal3          right, down
goal4          up, left
goal5          up, right
goal6          up, right, up
goal7          down, right, up
goal8          left, left, down
goal9          right, down, down
goal10         left, up, right

Crafting environment
make plank     get wood, use toolshed
make stick     get wood, use workbench
make cloth     get grass, use factory
make rope      get grass, use toolshed
make bridge    get iron, get wood, use factory
make bed*      get wood, use toolshed, get grass, use workbench
make axe*      get wood, use workbench, get iron, use toolshed
make shears    get wood, use workbench, get iron, use workbench
get gold       get iron, get wood, use factory, use bridge
get gem        get wood, use workbench, get iron, use toolshed, use axe"}]